
OpenAI and Google DeepMind are among the leading technology companies building artificial intelligence (AI) systems. But several current and former employees of these organizations have now signed an open letter claiming there is little to no oversight of how these systems are built and that not enough attention is being paid to the key risks the technology poses. The open letter was endorsed by Geoffrey Hinton and Yoshua Bengio, two of the three ‘godfathers’ of AI, and calls on employers to adopt stronger whistleblower protections.

OpenAI, Google DeepMind employees demand right to warn about AI

The open letter states that it was written by current and former employees of major AI companies who believe in AI’s potential to deliver unprecedented benefits to humanity. It also points out the risks posed by the technology, including entrenching social inequality, spreading misinformation and manipulation, and even the loss of control over AI systems that could lead to human extinction.

The open letter argues that the self-governance structures these tech giants have put in place are not effective at scrutinizing these risks. It also contends that “strong financial incentives” encourage companies to overlook the potential harms their AI systems could cause.

The open letter questions whether AI companies can be trusted to take corrective action on their own, arguing that they already understand their systems’ capabilities, limitations, and varying levels of risk of harm: “They currently have little obligation to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”

The open letter makes four demands of employers. First, the signatories want companies to stop entering into or enforcing contracts that prohibit criticism over risk-related issues. Second, they call for a verifiable, anonymous process through which current and former employees can raise risk-related concerns with company boards, regulators, and appropriate independent organizations.

Third, the signatories urge these organizations to foster a culture of open criticism. Finally, the open letter insists that employers not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed.

The letter was signed by a total of 13 current and former employees of OpenAI and Google DeepMind. In addition to the two ‘godfathers’ of AI, British computer scientist Stuart Russell also endorsed the move.

Former OpenAI employee speaks on AI risks

Daniel Kokotajlo, one of the former OpenAI employees who signed the open letter, also published a series of posts on X (formerly Twitter) describing his experience at the company and the dangers of AI. He claimed that when he resigned, he was asked to sign a non-disparagement clause that would prevent him from making critical comments about the company, and that the company threatened to strip him of his vested equity if he refused to sign.

Kokotajlo argued that the neural networks powering AI systems are growing rapidly on the back of the massive datasets that feed them, and that no adequate measures are in place to monitor the resulting risks.

He added, “There is much we do not understand about how these systems work and whether they will continue to serve human interests even as they become increasingly smarter and surpass human-level intelligence in all areas.”

Notably, OpenAI has been drafting a Model Spec, a document intended to guide how its AI models should behave, and recently established a Safety and Security Committee. Kokotajlo applauded these commitments in one of his posts.

