On Duolingo, AI, and Humanity
I've been thinking a lot about how ironic and sad it is that Duolingo fired a bunch of its human translators in favor of AI. Duolingo, a company whose goal is, ostensibly, to help human beings learn to communicate better, to achieve greater understanding through language learning, has decided, fuck it, let's just do it with computers!
There's an assumption pushed by profit-seeking companies that AI is to knowledge work what machines were to manual work during the Industrial Revolution. Hand-sewing was made largely obsolete by sewing machines, and this is viewed as a net win for humanity. Cheaper goods, faster.
However, for one thing, the manual work replaced by machines tends to be highly restricted and repetitive. We see really good narrow applications of machine learning for tasks like that (identifying cancer, say). And for another, machines have not replaced all sewing. Humans still need to do finishing work, fine detail work, fixes, tailoring, etc. You can't just throw a sack of fabric pieces into the open bay doors of a garment factory, push a few buttons, and get beautiful finished clothes out at the end. If you were a huckster who promised this while trying to raise a bunch of venture capital, people would rightfully laugh you out of the shark tank. (Foreshadowing!)
Even if (BIG IF) this is the level of sophistication and time-saving AI (or any sort of nonhuman automation) can achieve, instead of reducing the workforce, we should be using automation to reduce the hours needed to produce the same output and give humans more leisure time. We already struggle with this under existing automation. Garment workers are paid shit. Laborers everywhere are struggling and being replaced with machines. Workers must own their own labor, must unionize, and must form co-ops to wrest control from the owning class.
But that's not even what I'm talking about today. The sanest voices in the room tend to view AI as something like a sewing machine: a labor-saving device for broad-strokes preliminary work that is refined and improved by humans. But the con men and executives in charge definitely think "human-level intelligence" is just around the corner for computers, and that soon enough they will be typing "give me a script for a summer blockbuster, then use Chris Evans's AI voice and body to act in it" into a smartphone and getting their first check by dinner.
I want to talk about the philosophy of human intelligence. I'm going to argue that human-level intelligence is by definition impossible for a machine to achieve, and that using whatever "artificial" intelligence looks like to replace the most human activities, the humanities, is something only the socially novocained Silicon Valley set could think would work.
What makes a human brain know what it knows? (I mean, we don't entirely know.) At a basic level, we can say the brain absorbs information from input and uses it to generate responses. That does sound a lot like what a computer does, doesn't it? Except...a human gathers that information through the medium of the human body. The human body, with all its sometimes bizarre, meatwad responses to external stimuli; the human body that evolved in real space with plants and air and water and scary bugs and disease; the human body that craves closeness and warmth and love and connection. Everything we do today as "horny electric meat bags" is arguably because it made sense at some level for a human living 10,000-200,000 years ago in a tightly knit community of hunter-gatherers whose greatest asset was its adaptability to varying conditions.
By definition, a machine can never have a human body as its interpretive mechanism, so how can we ever say that the responses it generates are humanlike?
One could be cliché and say an LLM that writes a love poem will always be inauthentic, because a computer does not have a heart whose pulse accelerates, or pupils that dilate, in the presence of another person. How could it know what love is or presume to create art about it? A poem jumbled together from humans' poetry can only ever ape that process. The counterargument is that the output is indistinguishable from what a human would have created, but even if that were true, the point of something like poetry is to feel connection to another person. What is even the point of reading a computer-generated poem other than a vague sense of novelty (and of course, profits for Amazon, although I suspect even they know people find LLM-generated writing emotionally bankrupt, which is why they go to great lengths to hide the fact that AI generates books on their platform)?
If the ultimate goal is to create "artificial intelligence" that is in any way humanlike, I think we are going to have to eventually talk about recreating a body that can take in environmental input. I don't think this is impossible, but it does seem inadvisable. Just personally, it seems like the height of hubris to believe we could design a mechanical body that could compete with billions of years of evolution, that could mimic everything we could do or improve upon it in the complex way that we operate. And if we could do that, shouldn't we be using it to improve prosthetic limbs or generate healthy organs, instead of pumping a bunch of carbon into the atmosphere to make a robot that says "Hi Brian, your jokes are sooooo funny, tee-hee! Have you read my latest love poem about you?"
But let's be less touchy-feely. Computer scientists have been trying to mimic the human capacity for language since the invention of computer science. (See here for a history of chatbots.) The goal was always to have a computer be able to talk like a person. Unfortunately, what the people heads-down in the muck, trying to bully electrons into doing their bidding, always forget is that language evolved as a social project. Language devoid of social context is fundamentally incomplete.
If I ask ChatGPT, "What's wrong with this French sentence?"
Je suis beau.
It can't possibly find anything wrong with it. It's grammatically correct. But I am a woman. Any two-year-old French speaker would recognize that it should be "je suis belle." A computer doesn't know my gender. That is context that would have to be explicitly fed to an LLM through text ("What's wrong with this statement if a woman said it?") but that a human body would gain through cues like voice, appearance, etc.
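To make that concrete, here's a quick sketch of what "feeding context through text" looks like. This is my own toy example in Python against OpenAI's chat API; the model name and setup are illustrative assumptions, not anything Duolingo or OpenAI actually ships.

```python
# Toy illustration: the only context an LLM has is what you type at it.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY in the
# environment. Model choice is arbitrary.
from openai import OpenAI

client = OpenAI()

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# Without context, the grammar alone looks fine.
print(ask("What's wrong with this French sentence? Je suis beau."))

# The speaker's gender has to be spelled out in text -- the cue a human
# listener would get for free from voice and appearance.
print(ask("What's wrong with 'Je suis beau' if a woman is saying it?"))
```

Every scrap of social context has to be typed in by hand, or the machine simply doesn't have it.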
We underestimate, and do not respect, the amount of personal, historical, environmental, and social context that is implicit in even the simplest human conversations. This is roughly what linguistics calls "social deixis." The relationship between the interlocutors is foundational to basic meaning and communication. For example, in Japanese, you use a different verb for "to give" depending on whether you are giving the object to someone in your in-group or out-group. And that in-group or out-group can change depending on context, even within the same conversation.
Let's say you're talking to your boss about giving flowers to an important client outside the organization at an upcoming event. You'd use a verb marking the client as out-group and above you (sashiageru, the humble form). Maybe then you say you really like the florist, because you got some flowers for your brother there recently. That calls for a different "give" verb (ageru, or the plainer yaru), because your brother is in your in-group, from your perspective as speaker.
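If you wanted a machine to get even this one choice right, you'd have to hand it the social context explicitly. Here's a toy sketch of my own (not anyone's actual NLP system) of what that encoding starts to look like:

```python
# A toy rule for picking a Japanese "give" verb, just to show how much
# social context has to be encoded explicitly. Real usage is messier;
# this is an illustration, not a linguistics engine.

def giving_verb(recipient_group: str, recipient_status: str) -> str:
    """Choose a 'give' verb from the speaker's perspective.

    recipient_group: "in" or "out", relative to the speaker right now --
    the same person can switch groups mid-conversation.
    recipient_status: "higher", "equal", or "lower".
    """
    if recipient_group == "out" and recipient_status == "higher":
        return "sashiageru"  # humble: giving up to a superior outsider
    if recipient_status == "lower":
        return "yaru"        # plain: giving to intimates, kids, pets
    return "ageru"           # neutral default

# The client at the event vs. your brother at the florist:
print(giving_verb("out", "higher"))  # sashiageru
print(giving_verb("in", "equal"))    # ageru
```

And notice that even this toy needs inputs (group, status, perspective) that no amount of scraped text hands you for free.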
These concepts are difficult enough for non-native human speakers to grasp. Do you really think a computer will be able to produce the appropriate verb without that context, even with unfathomable amounts of training sentences? And that's just one relatively simple example. A computer cannot learn what we do not teach it, and no one has even begun to try to teach computers about human social interaction (because, again, Silicon Valley dudes DON'T THINK IT'S IMPORTANT).
I imagine the kind of people who google basic social questions turning to ChatGPT in even more pathetic scenes of modern alienation. "How do I tell my college professor that I am sorry to hear of her husband's death, and does it matter if I haven't spoken to her in 15 years?" It's pretty telling about where we are today socially that most people would rather ask a computer not only basic questions of fact, but deeply personal and contextually-bound questions about human relations.
The fundamental error in what Duolingo is doing is that we tend to think of questions about language as questions of fact, when we should think of them as social questions. One thing I have always loved about language learning is that there are many correct answers, and also many very wrong answers.
How do I say "how's the weather?" in Japanese? Well, ma'am, I can give you a generic answer, but for a good answer, I'm going to need a little more information about what's going on in this scene. You know, like you'd need for a good answer to anything. We're being made to accept mediocre answers to good questions by people who want everything fast and cheap so they can line their pockets. And I really, really fucking hate that.
The things we desperately need are going to have to come from each other.