The Future. ChatGPT is reaching new advanced capabilities, and a number of headlines show that its “intelligence” is becoming more persuasive and humanlike. As the technology rolls out to consumers, a legal framework may need to develop that protects against the addictiveness of relying on AI’s advice, especially since using a personalized chatbot means giving up privacy.
Program pressure. ChatGPT may soon join your local debate team.
OpenAI CEO Sam Altman joined Thrive Capital in launching the Arianna Huffington-led Thrive AI Health, which will develop a personalized chatbot coach to provide “nudges and recommendations” to improve users’ health.
OpenAI formed a safety team called “Preparedness,” which is focused on analyzing AI’s “persuasion” skills, especially as conversational LLMs are designed to use the words and phrases they determine to be most compelling to humans.
And just last week, OpenAI announced that its system is allegedly on the verge of reaching Level 2 of a five-level track toward creating AI that can outperform humans. Level 2 is called “Reasoners,” meaning it can “do basic problem-solving tasks as well as a human with a doctorate-level education who doesn’t have access to any tools.”
John Burden, a research fellow at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence, said the fear surrounding reasoning AI is that systems “would be able to come to conclusions that we don’t like.”
In other words, now you’ll have to rally the energy to argue with humans and robots.