
Google and OpenAI's Secretive New Programs Raise Ethical Alarms

Google and OpenAI are facing criticism for a lack of transparency in the release of their latest AI models. 


OpenAI's GPT-4 and Google's Gemini, developed by Google DeepMind, have raised concerns because both companies departed from the AI community's established norm of open disclosure. The models were introduced with minimal technical detail, leaving researchers and the public in the dark about their structure and functionality. 


Google's Gemini has drawn particular attention for omitting model cards, a standard form of disclosure that documents a neural network's design and potential harms. Experts argue that this absence reflects a troubling trend: the most powerful and risky AI models are becoming increasingly difficult to analyze, which poses real risks for society.


The omission is especially striking because model cards were first developed at Google itself and have since been adopted across the AI community as a standard form of disclosure for newly released models. 
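
As a rough illustration of what such a disclosure covers, the sketch below defines a hypothetical, heavily simplified model card in Python. The structure, field names, and placeholder values are assumptions chosen for illustration only; they do not represent Google's or OpenAI's actual documentation format.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ModelCard:
    """Hypothetical, simplified model card: the kind of disclosure
    the article says was missing from the Gemini report."""
    model_name: str
    developer: str
    architecture_summary: str        # e.g. model family, parameter count
    training_data_description: str   # sources, cutoff date, filtering applied
    intended_uses: List[str] = field(default_factory=list)
    out_of_scope_uses: List[str] = field(default_factory=list)
    evaluation_results: Dict[str, float] = field(default_factory=dict)  # benchmark -> score
    known_limitations: List[str] = field(default_factory=list)
    potential_harms: List[str] = field(default_factory=list)            # bias, misuse, safety risks

# A filled-in card might look like this (all values are placeholders):
card = ModelCard(
    model_name="example-llm",
    developer="Example Lab",
    architecture_summary="Large transformer language model (details undisclosed)",
    training_data_description="Web text and licensed corpora (details undisclosed)",
    intended_uses=["research", "assisted writing"],
    out_of_scope_uses=["medical or legal advice"],
    evaluation_results={"example-benchmark": 0.0},
    known_limitations=["may produce factual errors"],
    potential_harms=["can reproduce biases present in training data"],
)
```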


Google's Gemini report says only that model cards will be produced at some unspecified point in the future, raising questions about how the neural network will be overseen and kept safe. The shift toward secrecy by both Google and OpenAI is viewed as a significant ethical problem: it leaves the broader community unaware of the inner workings of these advanced AI systems and hinders any independent assessment of their risks and benefits.


