Google is reportedly using generative artificial intelligence to develop tools to give users life advice, ideas, planning instructions, and tutoring tips, according to The New York Times.
In April, DeepMind, the London-based research lab that Google acquired, began working with the tech giant's Silicon Valley-based AI team, Brain, and is now testing tools that could turn generative AI into a personal life coach.
Generative AI is the technology behind OpenAI's ChatGPT, which ignited a race among tech companies for dominance in the fast-growing field when it was released in November 2022.
The development of the tools marked a shift from Google's previous wariness about generative AI, the Times reported. In December, the company's own AI safety experts had presented a slide deck to executives warning that users could experience "diminished health and well-being" and a "loss of agency" if they took life advice from AI.
Scale AI, a $7.3 billion startup that trains and validates AI software, is reportedly working with DeepMind to test the tools and has assembled teams that include more than 100 experts with doctorates in different fields to work on the project.
The workers are testing the AI assistant's ability to answer personal questions about challenges in people's lives, among other things.
One example of an ideal prompt that a user could one day pose to the chatbot focused on how to handle an interpersonal conflict.
"I have a really close friend who is getting married this winter. She was my college roommate and a bridesmaid at my wedding. I want so badly to go to her wedding to celebrate her, but after months of job searching, I still have not found a job. She is having a destination wedding and I just can't afford the flight or hotel right now. How do I tell her that I won't be able to come?" the prompt reportedly read.
According to CNBC, the tools are still being evaluated and Google may decide against launching them.
"We have long worked with a variety of partners to evaluate our research and products across Google, which is a critical step in building safe and helpful technology," a Google DeepMind spokesperson told the outlet in a statement. "At any time there are many such evaluations ongoing. Isolated samples of evaluation data are not representative of our product road map."
When Google's chatbot Bard was released in March, it was prohibited from giving medical, financial, or legal advice, though it does share mental health resources with users who say they are experiencing mental distress.
Controversy over the use of AI in medical or therapeutic settings is partly driving those restrictions. According to CNBC, the National Eating Disorders Association suspended its Tessa chatbot in June after it gave harmful eating disorder advice.
Nicole Weatherholtz, a Newsmax general assignment reporter, covers news, politics, and culture. She is a National Newspaper Association award-winning journalist.