One of the prerequisites to the DeepMind acquisition was that Google would create an ethics board to monitor its internal AI developments.
One question we are asked increasingly often is whether people should be scared of how far AI can advance.
AI and technology are changing the world as we know it. The opportunities are endless, and the rate at which technology is developing is incredibly exciting.
The pace of these developments also raises questions about how we can control and monitor AI, how we can ensure it is safe, and how we can stop the wrong people from using AI for the wrong reasons. To ignore these questions would be unwise.
Whilst the acquisition of DeepMind led to Google creating an AI ethics panel, this certainly does not mean other businesses will be as conscientious.
Elon Musk, Stephen Hawking and Steve Wozniak are just some of the figures who have warned of the impact AI could have, cautioning that it could prove more dangerous than nuclear weapons if it is not monitored and controlled correctly.
Whilst some may scoff and say that these scenarios are some way off, they should be listened to. After all, it was not that long ago that people would have said it was impossible for Deep Blue to beat a world chess champion, for Watson to win Jeopardy!, or for DeepMind's AlphaGo to defeat a world champion at Go.
Interestingly, setting up this ethics board was one of DeepMind's prerequisites for signing the acquisition papers, suggesting that co-founder Mustafa Suleyman knows AI has the potential to do harm.