More than 1,000 AI experts, including Elon Musk, CEO of Twitter and Tesla, and Apple co-founder Steve Wozniak, have called for a temporary pause in the development of advanced AI until safety measures are put in place.
According to Forbes, more than 1,000 AI experts and tech leaders have signed an open letter calling for a halt to the development of advanced AI systems, with heavy hitters such as Elon Musk and Steve Wozniak among the signatories. The letter asks for a pause of at least six months on AI systems more powerful than OpenAI’s GPT-4, in order to establish greater oversight and shared safety procedures.
The letter was published by the Future of Life Institute, a nonprofit dedicated to steering transformative technology toward benefiting life and away from extreme risks. Signatories include prominent researchers from Google-owned DeepMind and leading machine learning authorities such as Stuart Russell and Yoshua Bengio.
The open letter calls on AI labs and independent experts to jointly develop and implement a set of shared safety protocols for the design and development of advanced AI. These protocols would be rigorously audited and overseen by independent outside experts, to ensure that AI systems adhering to them are safe. The authors stress that the proposed pause is not a halt to AI development in general, but a step back from the dangerous race toward ever-larger, unpredictable black-box models with emergent capabilities.
Alongside safety protocols, the letter urges the creation of stronger AI governance systems. These should include provenance and watermarking tools to distinguish authentic from synthetic content and to track model leaks, as well as new regulatory authorities dedicated to AI oversight and monitoring. The signatories also propose increased public funding for technical AI safety research and liability for providers when AI causes harm.
Even though the open letter is unlikely to achieve all of its goals, it reflects widespread unease about AI technologies and underscores the need for stricter regulation. The authors argue that, just as society has previously paused other technologies with potentially catastrophic consequences, AI development should be paused as well.