No AI Is Used in this Blog (Except for This)
Artificial Intelligence's [mis]use for processing and conveying information
TECHNOLOGY
Daniel Donnelly
8/3/2025 · 4 min read


In March 2023, OpenAI released GPT-4, a major breakthrough for consumer-oriented artificial intelligence interfaces. Suddenly the troves of information stored in cyberspace about a given subject could be meaningfully digested and rendered into textual output valuable to the user. OpenAI’s companion program DALL-E would similarly process troves of visual data to produce imagery based on a user’s instructions in plain English. From there, it was a short leap to Sora, OpenAI’s generator of video from user prompts.
Seemingly overnight, competition flooded the market for consumer-based artificial intelligence. Every major software company was hawking its own AI program, such as Grok by X, Microsoft’s Copilot and Google’s Gemini. These programs were “trained” on unfathomably vast amounts of data siphoned from the software companies’ various servers. Thus, were a user to ask something like, “What would have been the cover art for an album by Prince released in the year 2024, had he lived so long?”, AI could query Prince’s full discography and survey all artwork related to the artist and his albums to produce the hypothesis seen above (which I confess doubled me over in laughter, remembering the Chappelle’s Show skit “Shirts versus Blouses”!).
As this competition raged between the software companies, what became apparent was that the AI models were learning not just from the information siphoned from the companies’ proprietary databases, but also from their respective (if not exclusive) user bases. That is, the AI models were learning from the user queries submitted to them, in much the same way as anyone learns anything through oft-repeated practice. Thus, the models’ training was bidirectional: it came from the engineers’ side and from that of the end consumers.
To improve AI interfaces via practice with user data, many software companies now offer AI to users in some form or another. Voilà, an AI plug-in for the Mozilla web browser, offers to editorialize web pages so that you need not personally view them. Microsoft in turn embeds Copilot into virtually every one of its products, such that Copilot offers to summarize your e-mails in Hotmail and to write your documents in Microsoft Word. In short, both the input step whereby information gets to the researcher, and the output step whereby he analyzes, digests and interprets said information, can be automated. Even the end consumer need not trouble himself to read the printed research – or watch it presented in a documentary – since AI can tell him all he wants to know about it.
But of course, any summary or interpretation necessarily involves curation: choices about what to exclude, what to emphasize, and what comparisons and parallels to draw. The end consumer may request “all he wants to know about” a given collection of information, yet by AI’s nature, he receives what the underlying algorithm thinks he should know about it.
For this reason, I personally choose not to use AI in my research and presentation. No judgment against anyone who does.
Six years ago, I started blogging on sociopolitical themes. As different situations, policies and personages came to my attention, I yearned to understand them properly. Any good teacher will tell you that you learn the most about a subject the moment you are responsible for teaching it to others, and by way of blogging, I methodically examined these subjects. Whereas initially they may have provoked in me a sense of approbation, apprehension, reminiscence or consternation, through analysis I would determine whether these responses were based on solid evidence or were more the product of internal dissonance. I picked up the concepts, flipped them around like Tetris pieces, and saw how they fit into one another. I zoomed out to see the bigger picture, or zoomed in for a mouse’s-eye view of the scenario.
Admittedly, it can be tedious. I pore over books, PDFs and websites to research the topics, then spend time writing about them in a way which hopefully entertains my readership. Yet I would have it no other way. The process may be laborious on my end, but it has proved invaluable in helping me understand these issues, and my hope is that it leaves my readership memorably better informed about them as well.
This is no defense of Luddism. There is no squeezing AI back into the toothpaste tube; it is here to stay and will only grow from here. AI is revolutionizing rote intellectual work of all sorts and making society more productive in ways we are just beginning to understand.
This is just some warranted caution about trusting AI both to condense the information you receive and the information you present to others. The time saved may cost you in ways beyond your own mental development: AI may rob you of the chance to arrive at your own conclusions, and by extension, it will adulterate your influence over the conclusions which others form about these topics.
For the creators themselves – writers, musicians, visual artists – overreliance on AI may be counterproductive in ways which should be fairly obvious. Considering that AI is built by anonymizing and aggregating intellectual property, using AI indirectly rewards the piracy of creators’ content. As for AI’s originality – the quality which distinguishes any creator – it may be an impractically long time before AI can come up with content which is not (hilariously) clichéd, like the attached album artwork.