Those who, like this columnist, spend too much time online will have noticed something of a feeding frenzy over the past two weeks. The cause is the release of an interesting chatbot – a software application capable of conducting an online conversation. The particular bot creating the fuss is ChatGPT, an artificial intelligence (AI) chatbot prototype that focuses on usability and dialogue, developed by OpenAI, an artificial intelligence research lab based in San Francisco.
ChatGPT uses a large language model built with machine learning methods and is based on OpenAI’s GPT-3 model, which can output human-like text when given a natural language prompt. It is an example of what is now called “generative AI”: software that uses machine learning algorithms to enable machines to generate artificial content – text, images, audio and video – based on its training data, in a way that can persuade a human user that its outputs are “real”.
ChatGPT has become very popular because it is easy to access and use: it runs in a browser. All you have to do is sign up for a free account with OpenAI, then give the program a task described in plain language. For example, you can ask it (as I did), “Is Donald Trump really a narcissist?” and receive a measured reply noting that some argue his behaviour and statements fit the diagnostic criteria for narcissistic personality disorder, while others believe his behaviour is better explained by other psychological factors.
Obviously it’s not exactly deep, but at least it’s grammatical. The bot also strives for an authoritative tone, which should set alarm bells ringing; authoritative-sounding misinformation can get more of a grip on ordinary people than the usual dross. But people seem to like the new bot. Even the Daily Mail is impressed. “The release of the artificial intelligence chatbot,” it said, “led to speculation that it could replace Google’s search engine within two years… Its ability to answer complex questions has led some to wonder if it could challenge Google’s search engine monopoly.”
ChatGPT is the latest installment in a long-running debate about digital technology. Is it something that augments human capabilities? (Like spreadsheets or a Google search, for example.) Or is it a technology that ultimately aims to replace humans?
Because these generative AI systems are significantly better than previous technologies at producing grammatical text, many people are unduly impressed with them – to the point that a few poor souls have even begun to wonder if the machines are sentient. What’s interesting about ChatGPT, however, is that it has surprised some of the sceptics who tried it. A prominent economist, Brad DeLong, for example, asked it to “write 500 words to tell me what [Neal] Stephenson’s Young Lady’s Illustrated Primer would report to its reader about the rise of neofascism and Trumpism in the 2010s” – and in return got a plausible little essay inspired by Stephenson’s 1995 science fiction novel, The Diamond Age: or, A Young Lady’s Illustrated Primer.
The most important question raised by the bot is whether it will change the assumptions people make when thinking about the impact of AI on jobs. Conventional wisdom is that the tasks most at risk from automation are those that are procedural, rule-based and routine. In this context, one of the most interesting experiments with ChatGPT has been conducted by a business school professor, Ethan Mollick, who asked it to do some of the essential work he does. For example: “Create a curriculum for a 12-session MBA-level introductory entrepreneurship course and provide the first four sessions. For each, include readings and assignments, as well as a summary of what will be covered. Include class policies at the end.”
The results surprised and impressed him. The bot produced “a perfectly fine curriculum for an introductory MBA [masters of business administration] class”. The readings are reasonably modern (although they don’t include page numbers, among other errors), and it even produced a reasonable structure for a final assignment. The experience prompted some sober reflections. “Rather than automating repetitive and dangerous tasks,” Mollick said, “there is now the prospect that the first AI-disrupted jobs will be more analytical, creative, and involve more writing and communication.”
It will be interesting to see how this plays out. Naturally, before embarking on this essay, I asked the bot to “write an 850-word John Naughton-style column on whether generative AI tools augment or replace human capabilities”. The result turned out to be so impeccably bland that it could only have been written by a machine trained on the output of the German-language Swiss newspaper Neue Zürcher Zeitung on a holiday. Phew! We columnists live to fight another day.
What I’ve been reading
If you’re not on Instagram and suffer from Fomo (fear of missing out), relax. Kate Lindsay has good news for you in her Atlantic feature Instagram Is Over.
Use It or Lose It – semiconductor version is Diane Coyle’s review, on her Enlightened Economist site, of Chris Miller’s book Chip War: The Fight for the World’s Most Critical Technology, about the geopolitics of silicon chips.
Computer scientist Paul Graham’s thoughtful essay Heresy, addressing the concept in the 2020s, can be found on his eponymous site.