Shocking Study Reveals AI Companies Leak Sensitive Data on GitHub

In Brief

  • A new study by Wiz found that 65% of major AI companies have accidentally exposed sensitive data such as API keys and credentials on GitHub.
  • These leaks risk granting unauthorized access to private AI models and other critical systems, potentially compromising security.
  • Experts warn that the rush to release AI products is causing dangerous security oversights that companies must urgently address.

A shocking study has revealed that major AI companies have been leaking sensitive data on GitHub. The study, conducted by Wiz, focused on companies in the Forbes AI 50 list. Of those companies, collectively worth over $400 billion, 65% were found to have left important secrets exposed online.

Findings from the study reveal that the companies leave behind information such as API keys, authentication tokens, and other credentials.

Although the information is not leaked on purpose, it exposes the companies to risks that can trickle down to users.

Giving Bad Actors Access to Models

Private AI models are supposed to be accessible only to the teams responsible for their development to ensure they are secure and not easily compromised.

Leaving such critical information on GitHub, however, grants unauthorized persons and potential bad actors access to key information such as training data and other system details. The report said:

“AI companies are racing ahead, but many are leaving their secrets behind. We looked at 50 leading AI companies and found that 65% had leaked verified secrets on GitHub. Think API keys, tokens, and sensitive credentials, often buried deep in deleted forks, gists, and developer repos most scanners never touch.”

Among the leaked secrets was one company’s Hugging Face token, which provided access to approximately 1,000 private AI models, leaving them open to later infiltration.
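To illustrate the kind of check the report describes, the sketch below shows a minimal regex-based secret scanner in Python. The patterns are illustrative assumptions, not Wiz’s actual rule set: production scanners use far broader pattern libraries, search deleted forks and gists, and verify candidate secrets against live APIs before reporting them.

```python
import re

# Illustrative patterns only (assumed for this sketch); real scanners
# maintain hundreds of provider-specific rules and validate matches.
SECRET_PATTERNS = {
    # Hugging Face user access tokens conventionally start with "hf_".
    "huggingface_token": re.compile(r"\bhf_[A-Za-z0-9]{30,}\b"),
    # Generic "key = 'value'" style assignments that look like credentials.
    "generic_credential": re.compile(
        r"(?i)\b(?:api[_-]?key|token|secret)\s*[:=]\s*['\"]([A-Za-z0-9_\-]{16,})['\"]"
    ),
}

def scan_text(text):
    """Return (rule_name, matched_text) pairs for every candidate secret."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

Running `scan_text` over repository files, commit history, and gists would flag strings like a hard-coded `hf_…` token before they ever reach a public remote, which is exactly the class of leak the study found.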

AI Companies Must Step Up Security Consciousness

AI tools are now used by a huge number of people and collect large amounts of information from every user. Because of this, AI companies need to be far more security-conscious.

Even major AI products like ChatGPT are not immune to compromise. Just recently, hackers hijacked its latest update to leak users’ emails, spreading fear among users.

According to Wiz, these vulnerabilities occur because teams rush to launch products to the detriment of security.

That attitude must change to avoid costly mistakes such as leaking sensitive data on GitHub.
