New York (EFE).- Apple CEO Tim Cook said that artificial intelligence (AI) is advancing too quickly for governments to regulate it, which is why he advocated regulation that starts from the companies themselves.
Companies that develop AI must “regulate themselves” beyond the rules that states establish, he said in an interview with the ABC network published this Tuesday.
“(State) regulation will have a hard time keeping up with the progress of this technology because it is moving so fast,” he said, so “companies have to apply their own ethical decisions.”
Cook noted that large language models (LLMs), the technology behind chatbots like OpenAI’s ChatGPT, Microsoft’s Bing and Google’s Bard, are very “promising” but can be double-edged.
“I think it’s very important to be very deliberate and thoughtful in the development and deployment of these (LLMs) because, being so powerful, one worries about things like misinformation,” the executive explained on ABC’s “Good Morning America” program.
The advancement of AI captures the world’s attention
Last week, a group of more than 300 industry experts and researchers, including Google DeepMind CEO Demis Hassabis, Anthropic CEO Dario Amodei and OpenAI CEO Sam Altman, among others, warned in a letter that AI poses an “extinction risk” comparable to pandemics or nuclear war.
Cook said in the interview that he uses ChatGPT himself, and that this technology is something his company “is looking at closely.”
Regarding AI in general, the Apple chief said that it is a technology already integrated into the company’s products, but that when people use them they do not think “of it as AI.”
This week Apple is holding its Worldwide Developers Conference (WWDC) at its headquarters in Cupertino (USA), and yesterday the company presented several products, the most prominent of which was the Apple Vision Pro, a mixed reality headset that lets users experience both virtual reality and augmented reality.