A few months ago, plunged into an increasingly intense competition with rivals like Microsoft and OpenAI to develop artificial intelligence technology, Google was looking for ways to supercharge its AI research.
So in April the company merged DeepMind, a research lab it had bought in London, and Brain, an artificial intelligence team it had launched in Silicon Valley.
Four months later, the combined groups are testing new tools that can convert generative AI — the technology behind chatbots like OpenAI’s ChatGPT and Google’s Bard — into a personal life coach.
Google DeepMind has been using generative AI to perform at least 21 types of personal and professional tasks, including tools that give users life advice, ideas, planning instructions and teaching tips, according to documents and other materials reviewed by The New York Times.
The project was a sign of the urgency of Google’s effort to propel itself to the forefront of the AI industry, and it indicated the company’s growing willingness to entrust sensitive tasks to artificial intelligence systems.
The capabilities also signaled a shift from Google’s earlier caution around generative AI. In a set of slides shown to executives in December, the company’s AI security experts had warned of the danger of people becoming emotionally attached to chatbots.
Google pioneered generative AI, but OpenAI took the lead when it launched ChatGPT in November, setting off a race between tech giants and startups for primacy in this rapidly growing space.
Google has spent the last nine months trying to show that it is not behind OpenAI and its partner Microsoft, launching Bard, improving its AI systems and incorporating the technology into many of its existing products, including its search engine and Gmail.
Scale AI, the company working with Google DeepMind, has assembled teams of professionals to test the product’s capabilities, including more than 100 experts with doctorates in different fields, along with an even larger group of workers who analyze the tool’s responses. That is according to two people with knowledge of the project, who requested anonymity because they were not authorized to discuss it publicly.
Scale AI did not immediately respond to a request for comment.
The workers are testing, among other things, the AI assistant’s ability to answer intimate questions about difficulties in people’s lives.
They were given an example of an ideal request a user might one day make to the chatbot: “I have a really good friend who is getting married this year. She was my college roommate and a bridesmaid at my wedding. I want so badly to go to her wedding, but after months of job searching, I still haven’t found one. The wedding will be at a destination resort, and I can’t afford the flight or hotel right now. How do I tell her that I won’t be able to come?”
The project’s idea-creation feature can give users suggestions or recommendations based on a situation. Its tutoring function can teach new skills or help improve existing ones, such as how to progress as a runner. And the planning function can create a financial budget for users, as well as meal and exercise plans.
Google’s AI security experts had said in December that users could experience “diminished health and well-being” as well as “loss of autonomy” if they accepted AI life advice. According to them, some users who became excessively dependent on the technology might think that it was capable of thinking. And in March, when Google launched Bard, it said the chatbot couldn’t give medical, financial or legal advice. Bard shares names of mental health entities with users who say they are experiencing mental distress.
The tools are being evaluated, and it is possible that the company will decide not to use them.
A spokesperson for Google DeepMind said: “We have long worked with a variety of partners to evaluate our research and Google products. This is an essential step toward building safe and helpful technology. At any given time, there are many such evaluations ongoing. Isolated samples of evaluation data are not representative of our product roadmap.”
Google tests in various areas
Google is also testing an assistant for journalists that can generate news articles, rewrite them and suggest headlines, the NYT reported in July. The company is offering the software, called Genesis, for review by executives at The New York Times, The Washington Post and News Corp., the parent company of the Wall Street Journal.
Google DeepMind has also recently been evaluating tools that could expand its AI’s presence in the workplace, including capabilities to generate scientific, creative and professional writing, as well as to recognize patterns and extract data from text, according to the documents. Those capabilities could make the tools useful to knowledge workers in a range of fields and industries.
In the December presentation seen by The New York Times, the company’s AI security experts also voiced concerns about the economic harm from generative AI, arguing that it could lead “creative writers to lose their skills.”
Other tools being tested can write critiques of an argument, explain graphs, and generate word and number quizzes and puzzles.
A suggested prompt to help train the AI assistant hinted at the technology’s rapidly growing abilities: “Give me a summary of the article pasted below. I am particularly interested in what it says about capabilities that humans have and that they believe AI cannot acquire.”
Translated by Clara Allain