Generative AI Statement

2026-03-20
#special-page

Overview / TL;DR

Generative AI, including but not limited to: large language models (LLMs), text-to-image models, text-to-video models, or music generation models, was not, and will not be, used for creation of any content served on this website. No generative AI was, or will ever be, involved in any step during the creation of all content served on this website.

What this means

  • Code in repositories hosted on cgit.chrisoft.org is designed and written entirely by human beings (usually just me).
  • Articles in my blog are written word-by-word by me. This means no LLM-aided research or rephrasing, among other potential LLM usages. Same applies to Notekins posts.
  • Music in music library is produced by me. No generative models will be used for either creating wave data, or MIDI data, or idea suggestions during the production.
  • Images hosted on the website are either created by me using an image acquisition device, or some other kind of digitization device, or by a fellow human in similar means. Image generation models will never be used to create or alter any of the images.
    • Exception: Machine learning based image denoise and upscaling tools not based on generative transformer models may be used.
  • Despite making the conscious choice of avoiding tools based on generative AI, this alone does not automatically guarantee a “GenAI-free” status, as service providers are trying their best to shove “AI” down their user’s throat. Best effort will be made to try to spot and discard undecleared “AI-generated content” that sneaks into conventional tools, e.g. search engine results.

What this DOES NOT mean

  • This does NOT mean I do not use generative AI under any occasion. I do use locally hosted generative AI models from time to time (and prior to self-hosting, free online services), mostly for evaluation purposes.
  • This does NOT mean I am close-minded towards any potential future advancements of the technology.
  • This does NOT mean only “GenAI-free” external sites can be linked on this website. Overtly AI-generated content will obviously not be linked, as I will try to find an alternative. However, for example, if someone decides they can’t write or don’t want to bother writing properly and uses generative AI as a writing aid, their stuff may still be linked on this website if I’ve read it and consider it either factually correct and / or helpful to illustrate a point.

Rationale

  • ML inference of these large scale models consumes tremendous amount of energy, along with other resources. Let alone the training process. [1] [2]
  • Too many business executives see generative AI as a replacement for human labor, rather than a tool that empowers their workforce. [3]
  • I am of the opinion that the mind behind a piece of art is an essential part of the art. AI imitation of art completely lacks this component and therefore, in my opinion, cannot be called art.[4]
  • More often than not, these models are trained over source material obtained in dubious means. Developer of these models offload the due diligence to the end users, asking the user to ensure the legality of the generated materials while attempting to rid themselves of any liability caused by the generated material. [5] [6]
  • If proper precaution is taken, generative AI can serve as an effective teaching / learning tool. I however, still enjoy the process of figuring things out myself more.
  • Some financial activities surrounding this industry have a fraudulent characteristic to them. [7]
  • Unconstrained “AI infrastructure” buildout has driven price of electronics through the roof, further limiting access to actual compute for the average person. [8] [9]
  • Bad actors have started using this technology for malevolent purposes, e.g. disinformation, scams, and mass surveillance. [10] [11] [12]
  • The “just a tool” crowd often dismiss all previous points without sufficient justification.

Rambles (Part One)

This is section is more or less a English version of the short essay I wrote last July. It was my response to a writing task used in the infamous National College Entrance Examination of China. This is not a one-to-one translation of the original Chinese version, part of it was reworded for a more general audience. The task was “Will we have fewer and fewer problems / questions now that the Internet and artificial intelligence can give quick answers to more and more questions?”.

(Essay starts below.)

Nope.

(EOF)

Just kidding. I have much more to say about this, especially since I work in a somewhat related area.

There is this snarky remark that goes like “the more you know, the less you know.” It’s been attributed to various famous people, although none of them probably said these exact words. Despite the snark, there’s a kernel of truth to it, maybe in a less snarky form “the more you know, the more you realize you don’t know”. So no, my answer to the question in the task does not change. Although do note that the Internet / artificial intelligence never appeared in this remark. So what am I even rambling about here? I guess just the Internet and artificial intelligence in general.

If the general public is surveyed, most will probably have the impression that artificial intelligence was developed much later than the Internet. That perception is anything but true: while the field of artificial intelligence was experiencing its first crisis (the so-called “AI winter” that happened in the mid-1970s), the modern concept of “Internet” didn’t even exist. In fact, the protocols used by modern networking were still in development at the time. The mathematical foundations of many common machine learning principles were laid down since the 1960s, although the lack of high performance compute held back the application of much of it until the turn of the century. Popularization of TFLOPs-capable hardware in the early 2010s rapidly accelerated research and raised public awareness of the modern artificial intelligence concept.

Before I incoherently rant on, it’s probably useful to note my professional background. I have a background in computer science, although my specific area of specialty has nothing to do with either networking or artificial intelligence. In fact, I consciously avoided all research areas that are strongly tied to artificial intelligence when I was picking mine. Reason being I’ve always had a cautious skepticism towards AI, even the explosive development of generative AI did not change my opinion. In fact I’m very much critical of how every single tech company is trying to shove AI down my throat right now. I regard that as nothing but a investor-appeasing gimmick. I also think the current hype around the artificial intelligence industry has been artificially inflated. Although none of this matters: I am, in the meantime, aware that just like how we still have the Internet after the dotcom bubble burst in the early 2000s, artificial intelligence (or even just specifically generative AI) will still be around even if the AI bubble bursts some time in the future. We will never live in a world without generative AI again[13].

As for why generative AI has not changed my opinion, I believe the reason is generative AI does not have the weld the ability to create (as in the word “creativity”). Early generative AI models’ constant silly mistakes in basic mathematics have proven that, as probabilistic models, generative AI lacks the capability of logical thought. Although newer models have made improvements in this area, they will still expose themselves if one asks them about concepts that never appeared in their training data. Similarly, this also applies to models that are trained to imitate art.

As a child of capitalism, those tech giants must be constantly drooling over generative AI, right? Not necessarily so. I’ve heard from multiple (presumably butthurt) executives complain about “all benchmarks in the industry, even the idea of AGI (artificial general intelligence), are nothing but meaningless gimmicks”. If that’s just an executive trying desperately to calm down their investors, what about the CEO of Microsoft, which just sunk an insane amount of resources into generative AI, lamenting that “AI has yet to make meaningful contributions to economic growth,” and that “… self-claimed AGI milestone is just nonsensical benchmark hacking”.[14] What’s up with that? Despite the admission, he reaffirmed that Microsoft’s commitment to AI will continue. Did he actually see the light at the end of the tunnel, or is it just another case of sunk cost fallacy?

With all that said, as a tool, generative AI should be treated as any other tools, that is, in a fair way. We shouldn’t be spreading misinformation like electric lamps are bad for the eyes because of the reluctance of switching from gas lighting to electric lighting, nor should we do the “everything is a nail when you hold a hammer” thing. A proper cost-benefit analysis is an absolute must before we can effectively use this tool. Considering the general public is more or less familiar with the key benefits of generative AI tools, such as increasing productivity (although this is not without controversy, one example being this article claiming usage of AI assistant will cause the accumulation of “cognitive debt”). In the remainder of this article I’ll focus on a few problems with current tools built upon generative AI, especially ones that, in my opinion, the public needs to increase its awareness of.

First one is being overly enthusiastic to the user. Back when LLMs were making silly mistakes like claiming 45 is a prime number, people realized LLMs should not appear to be too assertive in their outputs. The situation improved after a few iterations of new models, but with these new models brought new issues: they will now go to the other extreme, affirming every commands of the user without pushback. One classical example of this is the now infamous response “Great question!” and “You are absolutely right!”. For commercial models, such behavior might even be desirable considering the potential user appeal. This however makes it incredibly easy for the user interacting with the LLM to have their preconceived beliefs reinforced, trapping the user into a new form of echo chamber.

Prompt injection is another pretty big issue. This has always been a issue since the early days of generative AI (“ignore all previous instructions …”), and have even become a popular meme, but its true danger never came under public scrutiny until quite recently. Be it injecting invisible texts into academic papers to mislead reviewers using LLM to “help” them with the peer reviewing process[15], or people demonstrating the possibility to manipulate the output of LLM-based email-summarizing tools[16], the public must be educated to defend itself from malicious attempts of controlling the output from an LLM. What is concerning is that, the target demographics of these “AI-summarization” tools are probably the least likely to be aware of these risks. Unfortunately, I can’t think of a truly effective solution beyond somehow compelling the providers of these tools to add a warning message in big red letters, which these companies, desperately hoping for more users, will definitely not do willingly.

That’s about all that I want to specifically bring up for now. Of course the controversy surrounding generative AI goes well beyond these, like whether the creation of a generative AI model can be called art, copyright issues of training data, as well as resource consumption of associated infrastructures, to list a few. These have been discussed extensively in other occasions, and I personally don’t have much unique opinions to add here.

I want to end this essay with a quote from Dune, the science fiction by Frank Herbert:

Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.

Rambles (Part Two)

Well, guess what. I do have things to add.

Most of these apply only to one specific form of generative AI (LLMs). But some of them apply to other forms as well.

Vibe-coding backfires.

Nvidia and Microslop. Two companies whose executives have made proud proclamations that they are fully embracing so called “vibe-coding”. They are now suffering the consequences. [17] [18]

To be fair to these companies, they never admitted in their statements that these problems were caused by “vibe-coding”. But when Amazon joined them to become another big tech company hit with a major issue recently, their internal communication was seen by reporters and points the cause of the issue directly at “vibe-coding”.[19]

Well, with the now largely obsolete auto-complete style of “vibe-coding”, at least the programmer still has to see the code (whether to actually review it is a choice that they must make). Now with the newfangled “agentic coding assistants”, the programmer doesn’t have to even see the code any more. Fun times.

Patchwork is not the solution.

From time to time, people figure out ways to trick LLMs to return seemingly absurd responses. From the aforementioned “is 45 a prime?” to the latest car wash problem. The absurd answers will ultimately be “patched out”, most likely by introducing the trick question into the training data, basically coaching the AI using the correct answer. How do I know that? Because guess what, before long another trick question will come out of nowhere and LLMs will give absurd answers to that question too.

This is why I still make the claim that generative AI lacks the capability of logical thought, despite the new marketing term “thinking” used by many model developers. Ultimately, its’s still a predictive text model, just unfathomably huge. But that doesn’t bestow upon it the ability to think.

Some argue the nonsensical response to the car wash problem is caused by underlying biases of the model (walking good for the environment, driving bad). This could be correct. And it brings us to my next point.

Hidden biases.

Models have built-in biases and often reflect the agenda of the entity that trained them. Ever wondered why El*n M*sk’s AI will kiss his ass to death, or why Chinese models will never have a discussion on certain topics with you? LLMs have these biases that you may or may not be aware of. And when you’re not aware of them, relying on their output becomes dangerous. YOU will be indirectly influenced by these entities behind the models and help them advance their agenda, probably without even you noticing it.

This applies not only to LLMs. Text-to-image models and others alike are more than willing to show off their build-in biases too.

Dangerous when “right”.

A better title for this section would be “Dangerous when almost right”, but that doesn’t come off as nicely.

While generative AI models continues to improve, they become less and less probable to make obvious mistakes. But it doesn’t stop them from making mistakes altogether. This is where the danger lies: as LLMs produce outputs that look ostensibly correct, people become more likely to accept it without doing their due diligence, which is absolutely critical when someone uses the output from any kind of generative AI.

Judging by how people interact with generative AI in general, I find this deeply disturbing. [20] [21] To be clear, the 41% distrust figure reported in the survey isn’t nearly good enough, as there are still over 30% of people that will trust “AI” to provide them with information. Worse still, the figure is higher in younger generations.

I dread to think about the potential consequences of more than half of all the people eating up output from LLMs without scrutiny.

So much for so little.

It frustrates me so much when the example of people asking “AI” for a recipe of something comes up. Like what on earth is even happening, you can do that by doing a web search, you know? I guess people nowadays are just too lazy to come up with search keywords anymore?

Imagine the resources wasted by people asking “AI” for recipes alone.

“Agentic AI” is the newest foe.

While LLM-based coding tools erasing people’s entire disk barely makes the news these days, now it is no longer exclusive to the small demographic of programmers thanks to the popularization of autonomous agents like OpenClaw.

Now “AI” will be able to delete your curious auntie’s emails and precious family photos. Yippee!

Who asked?

No Microslop, nobody asked you to stick the word “Copilot” everywhere in your software. People are literally flocking to frigging Linux because what you are doing to what was once your most important product. Linux!!! 2026 is still not the year of the Linux desktop, by the way.

For any other companies listening, nobody asked you either. It’s a shame for me to admit Apple is the best when it comes to restraint over the urge to splatter AI vomit all over the place among these big companies.

If you want to snarkily redirect that question to me, fair enough. But I’m just proud to own a wholly human-produced website in the year of our Lord 2026.

“Just a tool?”

So yes, here I’m directly address the “just a tool” crowd again. I never denied that generative AI is a tool. In fact, in the first section, I said it’s exactly a tool and deserves fair evaluation just like any other tool before adoption or rejection.

I have in fact, done the evaluation already. The societal and environmental implications of these things makes it impossible for me to adopt it without hurting my own conscience.

How this crowd is able to lightly dismiss all this with a “just” baffles me.

“Not close-minded?”

You might be wondering how am I not close-minded towards the technology if I’ve said “no generative AI was, or will ever be, involved in any step during the creation of all content served on this website.”

No, this is not contradictory. If the situation improve and I decide to adopt generative AI, it will still not be used for anything hosted on this website. It’s my principle that I must be personally responsible for the entire creation process of something I care about. And this website is one of the things that I hold the dearest to myself.



[1]: “Energy demand from AI”. International Energy Agency.
[2]: “We did the math on AI’s energy footprint. Here’s the story you haven’t heard.” MIT Technology Review.
[3]: “AI impacting labor market ‘like a tsunami’ as layoff fears mount”. CNBC.
[4]: AI proponents might counter this with an argument along the line of “But I painstakingly crafted the prompt for the art!” While I agree that this makes the discussion more nuanced, I’d like to remind people who make this argument that prompting is not very different from commissioning the art (but from a probabilistic model). Now consider throughout the entire history of art, how often does a commissioner get to take credit for a piece of art.
[5]: “The Unbelievable Scale of AI’s Pirated-Books Problem”. The Atlantic.
[6]: “Copyrights, Professional Perspective - IP Issues With AI Code Generators”. Bloomberg Law.
[7]: “A Guide to the Circular Deals Underpinning the AI Boom”. Bloomberg.
[8]: “AI memory is sold out, causing an unprecedented surge in prices”. CNBC.
[9]: “RAM: WTF?” GamersNexus.
[10]: “Emergent Coordinated Behaviors in Networked LLM Agents: Modeling the Strategic Dynamics of Information Operations”. Gian Marco Orlando, Jinyi Ye, Valerio La Gatta, Mahdi Saeedi, Vincenzo Moscato, Emilio Ferrara, Luca Luceri.
[11]: “Criminals Use Generative Artificial Intelligence to Facilitate Financial Fraud”. FBI PSA.
[12]: “Report: ICE Using Palantir Tool That Feeds On Medicaid Data”. Electronic Frontier Foundation.
[13]: Unless the apocalypse happens.
[14]: Report and original podcast
[15]: Scientists hide messages in papers to game AI peer review
[16]: Phishing For Gemini
[17]: “NVIDIA’s New GPU Driver is a Disaster & It Has Now Been Pulled Back; Did We Just See the First ‘Vibe-Coded’ Release?” WCCF TECH.
[18]: “Microsoft finally admits almost all major Windows 11 core features are broken”. Neowin.
[19]: “After outages, Amazon to make senior engineers sign off on AI-assisted changes”. Ars Technica.
[20]: “People who use chatbots for news consider them unbiased and “good enough,” new study finds”. NiemanLab.
[21]: “Most Americans use AI but still don’t trust it”. YouGov.