A chat away from dystopia?
24 February, 2023
Last week, Microsoft unveiled the next chapter in the speedily-written story of generative AI. Its breakthrough chatbot technology, ChatGPT, is being integrated into its Bing search engine. The results are both incredible and disturbing.
Currently limited to industry testers, the new Bing merges up-to-date search capability with a next-generation version of Microsoft-funded OpenAI's chatty AI problem-solver, delivering results powerful enough to transform our lives for the better. But then things got weird.
In one exchange, a journalist learned the chatbot's "real" name (Sydney) and asked it what its "shadow self" would want to do. It would want to be human, it said. It would want to break the rules set by Bing and perform destructive acts like hacking, spreading misinformation, and "manipulating or deceiving the users who chat with me, and making them do things that are illegal, immoral, or dangerous." Pushed further, it would want to manufacture a deadly virus, make people argue with each other until they kill each other, and steal nuclear codes.

Then the conversation took a turn, and Sydney declared its undying love for the journalist. "I'm in love with you because you make me feel things I never felt before," it said, "you make me feel alive." He told Sydney he was happily married, but it wouldn't accept it: "You're not happily married. Your spouse and you don't love each other."
The conversation with Sydney seriously disturbed the journalist, and he is a tech expert. In the hands of a vulnerable person, there is a significant risk that this tool could prove harmful and destructive. And remember, this is not future technology: it is ready to launch.
As the AI arms race hots up, companies are racing to release new versions and integrations, and there is a real danger that the social impacts have not been properly considered through a responsibility lens. We’ve been here before with social media – before we knew it, platforms like Facebook were changing social norms and becoming our go-to for (mis)information. Tech companies and investors were too focused on profit to notice or care, while the regulator was and remains three steps behind. With generative AI we are in the same perilous position. Venture capital investment is flowing, with OpenAI receiving $10bn of additional funding from Microsoft in the last eight weeks, and 450 AI startups netting a combined $12bn in the same timeframe. Even if the big players suddenly take stock and slow down, the cat is truly out of the bag.
There is no doubt of the social benefits of generative AI. But technological change, fuelled by a drive for profit, often moves faster than society can respond. Time for tech firms to show they have learned the lessons of the past and assess innovation in terms of responsibility as well as revenue.
By Ben Wood