Generative AI models like ChatGPT are so shockingly good that some now claim that AIs are not only equals of humans but often smarter. They toss off beautiful artwork in a dizzying array of styles. They churn out texts full of rich details, ideas, and knowledge. The generated artifacts are so varied, so seemingly unique, that it’s hard to believe they came from a machine. We’re just beginning to discover everything that generative AI can do.
Some observers like to think these new AIs have finally crossed the threshold of the Turing test. Others believe the threshold has not been gently passed but blown to bits. This art is so good that, surely, another batch of humans is already headed for the unemployment line.
But once the sense of wonder fades, so does the raw star power of generative AI. Some observers have made a sport of asking questions in just the right way so that the intelligent machines spit out something inane or incorrect. Some deploy the old logic bombs popular in grade-school art class—such as asking for a picture of the sun at night or a polar bear in a snowstorm. Others produce strange requests that showcase the limits of AI’s context awareness, also known as common sense. Those so inclined can count the ways that generative AI fails.
Here are 10 downsides and defects of generative AI. This list may read like sour grapes—the jealous scribbling of a writer who stands to lose work if the machines are allowed to take over. Call me a tiny human rooting for team human—hoping that John Henry will keep beating the steam drill. But, shouldn’t we all be just a little bit worried?
Plagiarism

When generative AI models like DALL-E and ChatGPT create, they’re really just making new patterns from the millions of examples in their training set. The results are a cut-and-paste synthesis drawn from various sources—also known, when humans do it, as plagiarism.
Sure, humans learn by imitation, too, but in some cases the borrowing is so obvious that it would tip off a grade-school teacher: large blocks of text reproduced more or less verbatim. Sometimes, however, there is enough blending or synthesis involved that even a panel of college professors would have trouble detecting the source. Either way, what’s missing is uniqueness. For all their shine, these machines are not capable of producing anything truly new.
Copyright

While plagiarism is largely an issue for schools, copyright law applies to the marketplace. When one human pinches from another’s work, they risk being taken to a court that could impose millions of dollars in fines. But what about AIs? Do the same rules apply to them?
Copyright law is a complicated subject, and the legal status of generative AI will take years to settle. But remember this: when AIs start producing work that looks good enough to put humans on the unemployment line, some of those humans will surely spend their new spare time filing lawsuits.
Uncompensated labor

Plagiarism and copyright are not the only legal issues raised by generative AI. Lawyers are already dreaming up new ethical issues for litigation. As an example, should a company that makes a drawing program be able to collect data about the human user’s drawing behavior, then use the data for AI training purposes? Should humans be compensated for such use of creative labor? Much of the success of the current generation of AIs stems from access to data. So, what happens when the people generating the data want a slice of the action? What is fair? What will be considered legal?
Information is not knowledge
AIs are particularly good at mimicking the kind of intelligence that takes years to develop in humans. When a human scholar is able to introduce an obscure 17th-century artist or write new music in an almost forgotten Renaissance tonal structure, we have good reason to be impressed. We know it took years of study to develop that depth of knowledge. When an AI does these same things with only a few months of training, the results can be dazzlingly precise and correct, but something is missing.
If a well-trained machine can find the right old receipt in a digital shoebox filled with billions of records, it can also learn everything there is to know about a poet like Aphra Behn. You might even believe that machines were made to decode the meaning of Mayan hieroglyphics. AIs may appear to imitate the playful and unpredictable side of human creativity, but they can’t really pull it off. Unpredictability, meanwhile, is what drives creative innovation. Industries like fashion are not only addicted to change but defined by it. In truth, artificial intelligence has its place, and so does good old hard-earned human intelligence.
Intellectual stagnation

Speaking of intelligence, AIs are inherently mechanical and rule-based. Once an AI plows through a set of training data, it creates a model, and that model doesn’t really change. Some engineers and data scientists imagine gradually retraining AI models over time, so that the machines can learn to adapt. But, for the most part, the idea is to create a complex set of neurons that encode certain knowledge in a fixed form. Constancy has its place and may work for certain industries. The danger with AI is that it will be forever stuck in the zeitgeist of its training data. What happens when we humans become so dependent on generative AI that we can no longer produce new material for training models?
Privacy and security
The training data for AIs needs to come from somewhere, and we’re not always so sure what gets stuck inside the neural networks. What if AIs leak personal information from their training data? To make matters worse, locking down AIs is much harder because they’re designed to be so flexible. A relational database can limit access to a particular table with personal information. An AI, though, can be queried in dozens of different ways. Attackers will quickly learn how to ask the right questions, in the right way, to get at the sensitive data they want. As an example, say the latitude and longitude of a particular asset are locked down. A clever attacker might ask for the exact moment the sun rises over several weeks at that location. A dutiful AI will try to answer. We don’t yet understand how to teach an AI to protect private data.
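The sunrise attack above can be made concrete. The sketch below is a toy model, assuming a spherical Earth, no atmospheric refraction, and no equation-of-time correction; the "secret" coordinates and dates are invented for illustration. It shows that just two truthfully answered sunrise times, on a winter and a summer date, are enough to recover a hidden latitude and longitude by brute-force search.

```python
import math

def declination(day_of_year):
    """Approximate solar declination in degrees (simplified formula)."""
    return -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))

def sunrise_utc(lat, lon, day_of_year):
    """Sunrise in UTC hours, ignoring refraction and the equation of time."""
    decl = math.radians(declination(day_of_year))
    cos_omega = -math.tan(math.radians(lat)) * math.tan(decl)
    cos_omega = max(-1.0, min(1.0, cos_omega))   # clamp for polar day/night
    omega = math.degrees(math.acos(cos_omega))   # sunrise hour angle, degrees
    solar_time = 12.0 - omega / 15.0             # local solar time of sunrise
    return (solar_time - lon / 15.0) % 24.0      # shift to UTC (east lon positive)

def hour_diff(a, b):
    """Signed difference between two times of day, wrapped into [-12, 12)."""
    return (a - b + 12.0) % 24.0 - 12.0

def scan(obs, lat_lo, lat_hi, lon_lo, lon_hi, step):
    """Brute-force grid search for the (lat, lon) best matching the observations."""
    best, best_err = None, float("inf")
    for i in range(int(round((lat_hi - lat_lo) / step)) + 1):
        lat = lat_lo + i * step
        for j in range(int(round((lon_hi - lon_lo) / step)) + 1):
            lon = lon_lo + j * step
            err = sum(hour_diff(sunrise_utc(lat, lon, d), t) ** 2
                      for d, t in obs)
            if err < best_err:
                best, best_err = (lat, lon), err
    return best

def locate(obs):
    """Coarse pass over the globe, then a fine pass around the best cell."""
    lat0, lon0 = scan(obs, -60, 60, -180, 180, 1.0)
    return scan(obs, lat0 - 1, lat0 + 1, lon0 - 1, lon0 + 1, 0.05)

# Two "innocent" sunrise answers (winter and summer) for a hidden location.
secret = (48.8, 2.3)    # hypothetical locked-down asset
obs = [(d, sunrise_utc(*secret, d)) for d in (10, 172)]
print(locate(obs))      # recovers approximately (48.8, 2.3)
```

The point of the sketch is not the astronomy but the shape of the leak: no single answer reveals the coordinates, yet a handful of seemingly harmless answers, combined offline by the attacker, pins them down.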
Undetected bias

Even the earliest mainframe programmers understood the core of the problem with computers when they coined the acronym GIGO, for “garbage in, garbage out.” Many of the problems with AIs come from poor training data. If the data set is inaccurate or biased, the results will reflect it.
The hardware at the core of generative AI might be as logic-driven as Spock, but the humans who build and train the machines are not. Prejudicial opinions and partisanship have been shown to find their way into AI models. Perhaps someone used biased data to create the model. Perhaps they added overrides to prevent the model from answering particular hot-button questions. Perhaps they put in hardwired answers, which then become challenging to detect. Humans have found many ways to ensure that AIs are excellent vehicles for our noxious beliefs.
Machine stupidity

It’s easy to forgive AI models for making mistakes because they do so many other things well. It’s just that many of the mistakes are hard to anticipate because AIs think differently than humans do. For instance, many users of text-to-image functions have found that AIs get rather simple things wrong, like counting. Humans pick up basic arithmetic early in grade school and then we use this skill in a wide variety of ways. Ask a 10-year-old to sketch an octopus and the kid will almost certainly make sure it has eight legs. The current versions of AIs tend to flounder when it comes to the abstract and contextual uses of math. This could easily change if model builders devote some attention to the lapse, but there will be others. Machine intelligence is different from human intelligence and that means machine stupidity will be different, too.
Human gullibility

Sometimes without realizing it, we humans tend to fill the gaps in AI intelligence. We fill in missing information or interpolate answers. If the AI tells us that Henry VIII was the king who killed his wives, we don’t question it because we don’t know that history ourselves. We just assume the AI is correct, in the same way we do when a charismatic presenter waves their hands. If a claim is made with confidence, the human mind tends to accept it as true and correct.
The trickiest problem for users of generative AI is knowing when the AI is wrong. Machines can’t lie the way that humans can, but that makes them even more dangerous. They can produce paragraphs of perfectly accurate data, then veer off into speculation, or even outright libel, without anyone knowing it’s happened. Used-car dealers or poker players tend to know when they are fudging, and most have a tell that exposes their calumny; AIs don’t.
Infinite abundance

Digital content is infinitely reproducible, which has already strained many of the economic models built around scarcity. Generative AI is going to break those models even more: it will put some writers and artists out of work, and it will upend many of the economic rules we all live by. Will ad-supported content work when both the ads and the content can be recombined and regenerated without end? Will the free portion of the internet descend into a world of bots clicking on ads on web pages, all crafted and infinitely reproducible by generative AIs?
Such easy abundance could undermine all corners of the economy. Will people continue to pay for non-fungible tokens if they can be copied forever? If making art is so easy, will it still be respected? Will it still be special? Will anyone care if it’s not special? Might everything lose value when it’s all taken for granted? Was this what Shakespeare meant when he spoke of the slings and arrows of outrageous fortune? Let’s not try to answer these questions ourselves. Let’s just ask a generative AI for an answer that will be funny, odd, and ultimately trapped in some mysterious netherworld between right and wrong.
Copyright © 2023 IDG Communications, Inc.