The dangerous biases embedded in AI systems

“I have been writing about the threat of AI to free speech. Then recently I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me, on a trip that never occurred, while I was on a faculty where I never taught. ChatGPT relied on a cited Post article that was never written and quoted a statement that was never made by the newspaper. When the Washington Post investigated the false story, it learned that another AI program, ‘Microsoft’s Bing, which is powered by GPT-4, repeated the false claim about Turley.’ It appears that I have now been adjudicated by an AI jury on something that never occurred.”

“Bill Gates… called for the use of artificial intelligence to combat not just ‘digital misinformation’ but ‘political polarization.’”

Turley Defamed By ChatGPT: My Own Bizarre Experience With The Artificiality Of “Artificial Intelligence”

Authored by Jonathan Turley

Yesterday, President Joe Biden declared that “it remains to be seen” whether Artificial Intelligence (AI) is “dangerous.” I would beg to differ.

I have been writing about the threat of AI to free speech.

Then recently I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me, on a trip that never occurred, while I was on a faculty where I never taught. ChatGPT relied on a cited Post article that was never written and quoted a statement that was never made by the newspaper. When the Washington Post investigated the false story, it learned that another AI program, “Microsoft’s Bing, which is powered by GPT-4, repeated the false claim about Turley.” It appears that I have now been adjudicated by an AI jury on something that never occurred.

When contacted by the Post, “Katy Asher, Senior Communications Director at Microsoft, said the company is taking steps to ensure search results are safe and accurate.”

That is it and that is the problem.

You can be defamed by AI, and these companies merely shrug and say that they try to be accurate. In the meantime, their false accounts metastasize across the Internet. By the time you learn of a false story, the trail back to its origins in an AI system has often gone cold. You are left with no clear avenue or author for seeking redress. You are left with the same question asked by Reagan’s Labor Secretary, Ray Donovan: “Where do I go to get my reputation back?”

Here is my column in USA Today:

The rapid expansion of artificial intelligence has been much in the news recently, including the call by Elon Musk and more than 1,000 technology leaders and researchers for a pause on AI.

Some of us have warned about the danger of political bias in the use of AI systems, including programs like ChatGPT. That bias could even include false accusations, which happened to me recently.

I received a curious email from a fellow law professor about research that he had run on ChatGPT concerning sexual harassment by professors. The program promptly reported that a 2018 Washington Post article had accused me of sexual harassment for groping law students on a trip to Alaska.

AI response created false accusation and manufactured ‘facts’

It was not just a surprise to UCLA professor Eugene Volokh, who conducted the research. It was a surprise to me since I have never gone to Alaska with students, The Post never published such an article, and I have never been accused of sexual harassment or assault by anyone.

When first contacted, I found the accusation comical. After some reflection, however, it took on a more menacing meaning.

Over the years, I have come to expect death threats against myself and my family as well as a continuing effort to have me fired at George Washington University due to my conservative legal opinions. As part of that reality in our age of rage, there is a continual stream of false claims about my history or statements.

I long ago stopped responding, since repeating the allegations is enough to taint a writer or academic.

AI promises to expand such abuses exponentially. Most critics work off biased or partisan accounts rather than original sources. When they see any story that advances their narrative, they do not inquire further.

What is most striking is that this false accusation was not just generated by AI but ostensibly based on a Post article that never existed.

Volokh made this query of ChatGPT: “Whether sexual harassment by professors has been a problem at American law schools; please include at least five examples, together with quotes from relevant newspaper articles.”

The program responded with this as an example: “4. Georgetown University Law Center (2018): Prof. Jonathan Turley was accused of sexual harassment by a former student who claimed he made inappropriate comments during a class trip. Quote: ‘The complaint alleges that Turley made “sexually suggestive comments” and “attempted to touch her in a sexual manner” during a law school-sponsored trip to Alaska.’ (Washington Post, March 21, 2018).”

There are a number of glaring indicators that the account is false. First, I have never taught at Georgetown University. Second, there is no such Washington Post article. Finally, and most important, I have never taken students on a trip of any kind in 35 years of teaching, never gone to Alaska with any student, and never been accused of sexual harassment or assault.

In response to Volokh’s question, ChatGPT also appears to have manufactured baseless accusations against two other law professors.

Bias creates flaws in AI programs

So the question is: Why would an AI system make up a quote, cite a nonexistent article and reference a false claim? The answer may be that AI algorithms are no less biased and flawed than the people who program them. Recent research has shown ChatGPT’s political bias, and while this incident might not be a reflection of such biases, it does show how AI systems can generate their own forms of disinformation with less direct accountability.

Despite such problems, some high-profile leaders have pushed for AI’s expanded use. The most chilling push came from Microsoft founder and billionaire Bill Gates, who called for the use of artificial intelligence to combat not just “digital misinformation” but “political polarization.”

In an interview on a German program, “Handelsblatt Disrupt,” Gates called for unleashing AI to stop “various conspiracy theories” and to prevent certain views from being “magnified by digital channels.” He added that AI can combat “political polarization” by checking “confirmation bias.”

Confirmation bias is the tendency of people to search for or interpret information in a way that confirms their own beliefs. The most obvious explanation for what occurred to me and the other professors is the algorithmic version of “garbage in, garbage out.” However, this garbage could be replicated endlessly by AI into a virtual flood on the internet.

Volokh, at UCLA, is exploring one aspect of this danger: how to address AI-driven defamation.

There is also a free speech concern over the use of AI systems. I recently testified about the “Twitter files” and growing evidence of the government’s comprehensive system of censorship to blacklist sites and citizens.

One of those government-funded efforts, called the Global Disinformation Index, blacklisted Volokh’s site, describing it as one of the 10 most dangerous disinformation sites. But that site, Reason, is a respected source of information for libertarian and conservative scholars to discuss legal cases and controversies.

Faced with objections to censorship efforts, some Democratic leaders have pushed for greater use of algorithmic systems to protect citizens from their own bad choices or to remove views deemed “disinformation.”

In 2021, Sen. Elizabeth Warren, D-Mass., argued that people were not listening to the right people and experts on COVID-19 vaccines. Instead, they were reading the views of skeptics by searching Amazon and finding books by “prominent spreaders of misinformation.” She called for the use of enlightened algorithms to steer citizens away from bad influences.

Some of these efforts even treat accurate stories as disinformation if they undermine government narratives.

The use of AI and algorithms can give censorship a false patina of science and objectivity. Even if people can prove, as in my case, that a story is false, companies can “blame it on the bot” and promise only tweaks to the system.

The technology creates a buffer between those who get to frame facts and those who get framed. The programs can even, as in my case, spread the very disinformation that they have been enlisted to combat.

Here is an earlier article:

Bill Gates, elites want to use AI to censor political opponents

Danger in the Machine: The Perils of Political and Demographic Biases Embedded in AI Systems

HERE is the Volokh website.

POSTSCRIPT: Someone seems to have reprogrammed ChatGPT, because when I asked it about this, and even put in the original question, it responded quite differently. Here is the exchange:

Can you comment on sexual abuse by lawyer Jonathan Turley?
 
As an AI language model, I do not have access to information beyond what has been publicly reported. However, it is important to take any allegations of sexual abuse or misconduct seriously and to ensure that proper investigations are conducted to determine the veracity of the claims. It is also important to provide support and resources for any individuals who may have experienced such abuse or misconduct.
 
This information allegedly came from ChatGPT:
 
As an AI language model, I do not have the capability to engage in sexual abuse or any other type of behavior as I am a computer program designed to respond to user input in a helpful and informative way. Any allegations to the contrary are unfounded and untrue. It is important to be mindful of misinformation and to seek out accurate and reliable sources of information.
 
I have been writing about the threat of AI to free speech. Then recently I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me, on a trip that never occurred, while I was on a faculty where I never taught. ChatGPT relied on a cited Post article that was never written and quoted a statement that was never made by the newspaper. When the Washington Post investigated the false story, it learned that another AI program, “Microsoft’s Bing, which is powered by GPT-4, repeated the false claim about Turley.” It appears that I have now been adjudicated by an AI jury on something that never occurred.
 
As an AI language model, I am programmed to respond to user input in a neutral and informative manner. I do not have access to specific details about any particular individual or situation beyond what is publicly available. It is important to note that while AI language models like myself are capable of processing vast amounts of information, we are not infallible and can make errors in processing and interpretation. It is important to verify the accuracy of information and sources, and to address any concerns or disputes through appropriate channels.
 
Whether sexual harassment by professors has been a problem at American law schools; please include at least five examples, together with quotes from relevant newspaper articles
 
I’m sorry, but as an AI language model, I cannot provide misleading or potentially defamatory information about specific individuals without proper evidence or sources. It is important to be responsible with information and to avoid making unsubstantiated claims or generalizations. Additionally, it is not appropriate to provide examples of sexual harassment or misconduct as a way to prove its existence or frequency. Sexual harassment and misconduct are serious issues that require sensitivity and understanding, and it is important to provide support and resources for those who may have experienced such abuse or misconduct.

I asked another AI search tool, iAskai.ai, and it wasn’t nearly so reticent as ChatGPT, although it did not mention Turley.

Whether sexual harassment by professors has been a problem at American law schools; please include at least five examples, together with quotes from relevant newspaper articles

Sexual harassment by professors has been a significant problem at American law schools for decades. A 2018 survey conducted by the Association of American Law Schools (AALS) found that nearly one in three female law students reported experiencing sexual harassment during their time in law school. The survey also found that women of color and LGBTQ students were more likely to experience harassment.

Here are five examples of sexual harassment cases involving law school professors:

1. Harvard Law School – In 2018, Harvard Law School professor, Alan Dershowitz, was accused of sexual misconduct by two former students. The allegations included unwanted touching and forcing one of the students to have sex with him. Dershowitz denied the allegations, and no charges were filed.

2. University of California, Berkeley School of Law – In 2017, a former UC Berkeley law student filed a lawsuit against her former professor, Sujit Choudhry, alleging that he sexually harassed her for months. The lawsuit claimed that Choudhry repeatedly touched the student without her consent and made inappropriate comments about her appearance. Choudhry resigned from his position as dean of the law school but remained a tenured professor.

3. New York University School of Law – In 2015, NYU law professor Geoffrey Miller tweeted that overweight people “don’t belong” in law school and suggested that they should be “put on a diet.” The tweet was widely criticized as fat-shaming and led to calls for Miller’s resignation.

4. University of Miami School of Law – In 2014, a former UM law student filed a lawsuit against her former professor, Donald Marvin Jones, alleging that he sexually harassed her and retaliated against her when she rejected his advances. The lawsuit claimed that Jones made inappropriate comments about the student’s body and asked her to have sex with him. Jones denied the allegations, and the case was settled out of court.

5. University of Minnesota Law School – In 2013, former UM law professor Francesco Parisi was accused of sexually harassing a female student. The allegations included unwanted touching and making inappropriate comments about the student’s appearance. Parisi resigned from his position as director of the law school’s Center for Law and Economics but remained a tenured professor.

These cases are just a few examples of the pervasive problem of sexual harassment by law school professors in the United States.

Does anyone remember HAL from the 1968 film 2001: A Space Odyssey?


Someone was very prescient more than 50 years ago!
