‘AI brings a great responsibility’

Google has waited a long time to publicly release its own chatbot powered by artificial intelligence (AI). According to director Martijn Bertisen of Google Netherlands, this has to do with the responsibility that the development of AI entails. “You should not only exploit the advantages, but also understand the disadvantages and limit them where possible, because AI also entails risks.”

Earlier this week, Google made its own AI chatbot ‘Bard’ available to users in the United States and the United Kingdom. (ANP/REX via Shutterstock)

Earlier this week, Google made its own AI chatbot ‘Bard’ available to users in the United States and the United Kingdom. Microsoft-backed OpenAI got there first with its chatbot ‘ChatGPT’, but according to Bertisen you don’t have to be first to be the best. ‘Before Google launched its search engine, a number of them already existed. But we simply did it differently.’

Missed boat

The director of Google Netherlands says that the company has been working with artificial intelligence for much longer. ‘I read that we as Google have missed the boat in the field of AI, but we have actually partly built that boat. The T in ChatGPT stands for Transformer, an architecture introduced in a research paper published by Google.’ Many of the services Google offers – such as the search engine, YouTube and Google Maps – already use AI to create a better user experience.

‘It is important that you use the advantages of AI, but also understand the disadvantages’

Martijn Bertisen, director of Google Netherlands

The fact that Google has waited longer to launch new products that are largely powered by AI has to do with responsibility, according to Bertisen. “We reach billions of users worldwide every day, so it is important that you not only use the advantages of AI, but also understand and possibly limit the disadvantages.”

Fire

Google CEO Sundar Pichai once compared AI to fire or electricity, Bertisen recalls. ‘It has meant a lot of good for humanity, but we first had to learn how to handle it.’ That is why Google chose to take its time introducing Bard, its AI chatbot.

‘We first tested it internally with 80,000 employees and then externally with 10,000 experts. We know that things still go wrong, and we have to learn from that.’ Bertisen acknowledges that it is difficult for a company to strike a balance between ambitious and, at the same time, responsible use of technology.

Weapons

In 2018, Google established a set of principles to ensure that it acts responsibly in the field of AI. ‘For example, it is important that AI does not reinforce stereotypes or discriminate. In addition, users’ privacy must be guaranteed and their data respected. We develop technology based on those principles.’

“Google does not want to contribute to AI that can be used for weapons development”

Martijn Bertisen, director of Google Netherlands

Google is cautious about developing AI for facial recognition, because it could contribute to a surveillance society. In addition, according to Bertisen, Google does not want to contribute to AI that can be used to develop weapons. ‘That also comes down to responsibility: you shouldn’t make technology available that people can use to do bad things.’

