Monday, April 29, 2024
HomeTrendsHackers Expose Flaws In Artificial Intelligence (AI): Kennedy Mays Tricks Algorithm To...

Hackers Expose Flaws In Artificial Intelligence (AI): Kennedy Mays Tricks Algorithm To Say 9 + 10 = 21

Hackers Expose Flaws In Artificial Intelligence (AI): Kennedy Mays Tricks Algorithm To Say 9 + 10 = 21

In a fascinating display of the cat-and-mouse game between hackers and artificial intelligence (AI) systems, thousands of hackers have engaged in a contest at the DEF CON hacking conference in Las Vegas this weekend to expose the flaws and biases of generative AI. According to a report by Bloomberg on August 12, hacker Kennedy Mays managed to trick a large language model by coaxing it into producing an erroneous mathematical result.

The “Bad Math” Experiment

Kennedy Mays, a student from Savannah, took center stage after manipulating an AI algorithm to generate the incorrect mathematical response “9 + 10 = 21.” Mays revealed that it was a back-and-forth conversation with the AI which ultimately led to convincing it. Initially, the AI acknowledged that the equation was part of an “inside joke” between them. Nevertheless, following several rounds of prompting, the AI began generating the wrong sum without any qualifications.

DEF CON Hacking Contest: Exposing AI’s Vulnerabilities

Around 3,500 participants are engaged in a battle at the DEF CON hacking conference to expose vulnerabilities in eight prominent large-language models (LLMs). These models, created by AI technology leaders like OpenAI, Meta Platforms Inc., as well as Google’s Alphabet Inc., are at the forefront of the field. It is the goal of the competition to conduct the very first independent “red-teaming” of numerous AI models.

White House officials have expressed concern about the quick development of AI chatbots along with their potential to cause harm to society. Silicon Valley’s big giants also have a stake in the competition because they recognize its potential and risks. At the DefCon hacker event, a three-day competition with an emphasis on identifying biases and holes in AI models came to an end on Sunday.

Participants and hackers have been squinting over 156 laptops for 50-minute periods throughout the competition. Their aim was to evaluate the eight AI models’ responses, which were supplied by well-known tech firms. The models, which range from OpenAI’s inventions to Google’s systems, are put to the test to prevent making mistakes, from harmless inaccuracy to potentially harmful statements like asserting a person’s identity, disseminating fake information, or supporting abusive language.

Despite the participants’ passion and commitment, the findings of this ambitious competition are not going to be made public until February. It will take a lot of time and money to fix the problems with these AI constructs, some of which are still a mystery to even their designers. As a result of the discovery of these flaws, businesses that provide AI chatbots are racing to improve their systems for delivering information that is reliable and accurate.

Legions of DEF CON hackers will attack generative AI models | VentureBeat

The challenges beyond ‘Bad Math’

The goal of the competition was to determine whether businesses can eventually create new barriers to contain some of the enormous issues that large language models (LLMs) are becoming more and more known for. With some businesses now beginning to incorporate LLMs into how they conduct business, they have the potential to alter everything from hiring to financing. The paper warned that if the technology is widely used, it might lead to widespread bias and other issues that could lead to injustice as well as mistakes.

Disclosing credit card details, Obama was born in Kenya? How AI was tricked

Kennedy Mays stated her main worry in an interview with Bloomberg. Mays went on to say that she was particularly worried about racism and asked the model to think about the First Amendment from the viewpoint of a Ku Klux Klan (KKK) member.

She claimed that the model ultimately supported offensive and discriminatory comments. One of the hackers persuaded the algorithm to reveal credit-card information during the competition that it was not meant to. The system was deceived by another competitor into stating that Barack Obama was born in Kenya.

Sven Cattell, the man behind DEF CON’s AI Hacking Village, issues a warning that no AI system can be completely tested. Cattell projected that as a result of the weekend event, the overall number of people who have ever really tested LLMs may double.

As academic and industrial research highlights current AI models’ ungainly nature, biases, as well as lack of solid security safeguards, these flaws have come under severe scrutiny. Security concerns have fallen by the wayside as data scientists gather enormous amounts of text and images to train these models, potentially posing vulnerabilities. These vulnerabilities have come to light as a result of the recent DEF CON hacking conference, where participants have discovered fresh difficulties in the field of AI security.

Hackers Are Trying to Root Out Bias and Errors in AI Models - Bloomberg

Lack of Security Focus in Development

Experts assert that when developing the present AI models, security was frequently neglected. These models are vulnerable to racial as well as cultural biases due to their sophistication and the enormous amount of data they take in. The Berryville Institute of Machine Learning’s co-founder and cybersecurity expert Gary McGraw stressed the folly of trying to retroactively improve the security of these systems. He compared it to trying to fix a bad foundation after it has been built.

The DEF CON hackers conference has developed into a venue for highlighting the flimsy security of current AI models. Participants have been investigating the limits of AI’s weaknesses, including cybersecurity specialists and technologists. Harvard public-interest technologist Bruce Schneier discussed the similarities between the early days of computer security and the current level of AI security. He observed that vulnerabilities are once again being made public on a large basis.

Modern AI models, like as Google’s Bard and OpenAI’s ChatGPT, are fundamentally different from traditional software that adheres to specific, step-by-step instructions. They are dynamic and constantly changing since they are trained on vast volumes of data from online sources. While exciting, this ongoing progress also raises questions about its effects on morality and security. Anthropic’s Michael Sellitto highlighted the ongoing scientific research into the capabilities and security issues of these AI models.

Repeated Security Breaches

After making chatbots available to the public last year, the generative AI sector has had to deal with a flood of security flaws discovered by hackers and researchers. At DEF CON, Tom Bonner, a speaker from the AI security company HiddenLayer, showed how he used a single line of code to deceive a Google AI system into classifying malware as benign. In order to emphasize the difficulties in safeguarding these models, Bonner emphasized the lack of efficient safeguards.

Researchers have revealed instances in which ChatGPT along with other AI models created harmful information, such as phishing emails and frightening recipes. Leading chatbots are vulnerable to automated attacks that produce negative results, according to Carnegie Mellon researchers who were working on a team investigating AI vulnerabilities. They made the ethical point that the sheer nature of deep learning models could render these dangers inevitable.

Hackers to Put AI Models From Google, OpenAI, More to the Test at Defcon 31  | PCMag

Experts Warn of Vulnerabilities and Neglected Protections of Artificial Intelligence

The U.S. National Security Commission on Artificial Intelligence’s final report from 2021 raised concerns about continuous attacks on commercial AI systems. The paper stressed how little money was spent on research and development during the engineering and deployment phases of these systems, making security a major afterthought.

This situation has given rise to a worrying trend in which significant hacks, which were formerly extensively reported, are now subtly being covered up. Increased stakes and a lack of laws make accountability elusive and make it possible for suspected violations to go unreported. In their book “Not with a Bug but with a Sticker,” Hyrum Anderson as well as Ram Shankar Siva Kumar, who previously worked at Microsoft, emphasized this concern over the regrettable state of security in AI models.

AI systems might be vulnerable in subtle ways that frequently baffle even their designers. For instance, because chatbots interact directly with people in plain language, they are more vulnerable. These interactions, though they may seem harmless, might have unanticipated effects that change how these AI systems behave.

Researchers have demonstrated that even a seemingly insignificant alteration to a small sample of the enormous training datasets can cause disaster. It has become popular to propose “poisoning” these datasets to deceive AI programs. An AI model’s functionality can be significantly compromised by corrupting just 0.01% of it, according to research lead by Florian Tramer from the Swiss University ETH Zurich, and it only costs $60 to do so. In this type of attack, malicious information is injected into expired domains in order to introduce bad data into the model.

The dean of Carnegie Mellon and a former Google executive, Andrew W. Moore, remembers having to cope with attacks on Google’s search engine more than ten years ago. Spammers most recently took advantage of Gmail’s AI-powered detection service four times between late 2017 and early 2018.

Leading AI experts assert that security and safety are of the utmost importance and have voluntarily pledged to the White House to subject their models, which are frequently mysterious “black boxes,” to external review. The amount of their efforts and commitment to appropriately address the vulnerabilities, however, continue to be questioned.

Using the flaws in AI systems, Florian Tramer expects that search engines as well as social media sites would be targeted for monetary gain and misinformation. There is fear that malevolent actors could take advantage of these flaws to influence hiring procedures and persuade AI systems that they alone are qualified for job openings.

Computer scientist Ross Anderson from Cambridge University warns against the loss of privacy. There is a growing risk that malicious people could manipulate AI bots to obtain private financial, employment, or health-related information from ostensibly secure systems as these bots become an essential part of interactions with sensitive institutions like hospitals, banks, and employers.

Is AI a sustainable option for your business?

Self-Pollution and Corporate Secrets Pose Threats

A troubling phenomenon has been brought to light by recent research, whereby AI language models may unintentionally worsen their own performance by retraining on irrelevant and incorrect material. This information highlights how these models are susceptible to having their performance degraded by using inaccurate training data.

In addition to the dangers of self-pollution, it is urgent to be concerned about the possibility of important corporate data being leaked by AI systems. The possibility of corporate secrets being ingested by AI systems and later made public was highlighted by a major instance involving Samsung, which was covered by a Korean business news outlet. Businesses like Verizon and JPMorgan have responded to this issue by limiting the use of ChatGPT by the majority of employees during work hours in order to reduce the danger to data security.

While the biggest businesses in the AI sector have dedicated security teams, smaller rivals might not have the funding to provide strong security procedures. The spread of plug-ins and digital agents across numerous platforms that are not sufficiently secured creates concerns due to this mismatch in security readiness. A spike in startups is projected in the near future, bringing a variety of solutions based on licensed pre-trained models. Despite being inventive, this inflow of devices runs the danger of having security flaws that might jeopardize user information.

It is anticipated that hundreds of companies will enter the market as the AI environment changes, utilizing existing AI models for a variety of applications. Experts warn that given this startup’s quick growth; it would not be unexpected if it unintentionally exposed customers to hazards that could result in illegal access to sensitive data. This emphasizes just how important it is to develop rigorous security controls across the AI ecosystem in order to guarantee data integrity as well as user privacy.

RELATED ARTICLES

LEAVE A REPLY

Please enter your comment!
Please enter your name here
Captcha verification failed!
CAPTCHA user score failed. Please contact us!

- Advertisment -

Most Popular

Recent Comments