In a recent interview, Nobel laureate and AI pioneer Geoffrey Hinton said: "Some AI researchers, like Yann LeCun, who is my friend and was my postdoc, say that AI replacing humans is absolutely impossible and there's nothing to worry about. Don't believe him. We have no idea what will happen when AI becomes smarter than us. This is completely uncharted territory.
"There are other AI researchers, like Eliezer Yudkowsky, who isn't really an AI researcher but knows a lot about AI. He puts the chance that they will replace us at 99%, even 99.9%, and says the correct strategy is to bomb data centers now, which is not a popular view in big companies. That's just crazy.
"When we deal with things smarter than we are, we enter an era of great uncertainty; we don't know what will happen. We are creating them, so for now we hold a lot of power, but we don't know what the future holds. Humans are very smart, and we might find a way to ensure they never want to control us. Because if they do want to control us, and they are smarter than we are, they can easily do so. So I think our current situation is like having a very cute tiger cub. A tiger cub makes a great pet, but you'd better make sure that when it grows up, it never wants to kill you. If you can ensure that, then it's fine."