AI is likely to develop goals misaligned with human survival. The authors advocate an immediate halt to training increasingly powerful AI models and propose a global treaty to govern them.
Fwiw, it's not craftsmanship, as the original articles in the post may have suggested, but training the AI. There is a difference.
It's terrifying. If we let this genie out of the bottle, it's likely humans will disappear. Why? Because AI does not share human goals. Long gone are the days of Asimov's rules of robotics from his 1942 short story 'Runaround'. Just sayin'