Top AI researcher says AI will end humanity and we should stop developing it now — but don’t worry, Elon Musk disagrees



Here’s something cheery to consider the next time you use an AI tool: most of the people working on artificial intelligence think it could end humanity. That’s the bad news. The good news is that the odds of it happening vary wildly depending on who you listen to.

p(doom) is the “probability of doom”: the chance that AI takes over the planet or does something to destroy us, such as creating a biological weapon or starting a nuclear war. At the cheeriest end of the p(doom) scale, Yann LeCun, one of the “three godfathers of AI”, who currently works at Meta, puts the chances at less than 0.01%, making it less likely than an asteroid wiping us out.


