Monday, January 22, 2024

OpenAI's new election rules are already being put to the test

Image: An illustrated magnifying glass hovering over the OpenAI logo.

With the 2024 election just months away, and less than a week after AI's biggest name pledged to help uphold a fair democratic process, developers are already testing those promises.

On Jan. 20, according to the Washington Post, OpenAI banned the team behind Dean.Bot, a ChatGPT-powered chatbot intended to spur interest in long-shot Democratic presidential candidate Rep. Dean Phillips ahead of the New Hampshire primary. OpenAI cited a failure to adhere to its usage guidelines, writing to the Washington Post: "Anyone who builds with our tools must follow our usage policies. We recently removed a developer account that was knowingly violating our API usage policies which disallow political campaigning, or impersonating an individual without consent."

The bot — which was removed shortly after the publication ran a story on its launch, but not before developers tried to keep it running on other APIs — was created by Delphi, an AI startup commissioned by a relatively new super PAC known as We Deserve Better, founded by Silicon Valley entrepreneurs Matt Krisiloff and Jed Somers.

The bot allowed potential voters to "converse" with Phillips and hear his campaign messages. Each exchange opened with an on-screen disclaimer that the bot was not the real Phillips and was part of the We Deserve Better effort. The website now greets visitors with an out-of-order message: "Apologies, DeanBot is away campaigning right now!"

The bot's removal is one of the first public takedowns since OpenAI released its new election season commitments, hinting at an out-of-the-gate effort to curb campaign content built on OpenAI's tech as soon as it goes public.

On Jan. 16, the company shared its full plan for addressing AI's place in this year's presidential election — what some are dubbing a political and technological flash point in a battle over AI misinformation. OpenAI announced new usage policies and commitments to fostering election integrity, including:

  • Greater transparency on the origin of images and which tools were used to create them, including the use of DALL-E.

  • Updates to ChatGPT's news sources and the inclusion of attributions and links in responses.

  • A partnership with the National Association of Secretaries of State (NASS) to surface accurate voting information in response to select procedural questions.

OpenAI already has a policy in place that prohibits developers from building applications for political campaigning and lobbying, or from creating chatbots that impersonate real people, including candidates or government entities. The company also prohibits applications that "deter people from participation in democratic processes," such as by misrepresenting voting procedures or eligibility requirements.

The upcoming election has stoked even greater concern about technology's role in spreading information and galvanizing voting blocs, and AI has stood out as a looming gray area in many social media companies' guidelines. Many watchdogs and advocates (as well as the FCC) worry about the potential for AI voice cloning, while others are ringing alarms over increasingly convincing deepfakes.

In December, two nonprofits published a report documenting that Microsoft's AI chatbot Copilot failed to provide accurate election information and spread misinformation.

In response, some companies are choosing to build more robust policies specifically for political campaigning, like those announced by Google and Meta last year. But full content removal, and the repercussions of AI-generated content on already susceptible consumers amid worsening media literacy, remain points of contention.



from Mashable https://ift.tt/CnGtQ7M
via IFTTT
