The student news site of Inglemoor High School

Nordic News


U.S. AI regulations should follow E.U. example

The U.S. and the E.U. have both been crucial in leading global AI risk management. However, compared to the E.U., the U.S. sorely lacks regulatory legislation. After more than 80 AI-related bills failed to pass Congress, President Joe Biden signed an executive order on Oct. 30, 2023, called “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” in an attempt to fill the gaps. The directive focuses on establishing new standards for AI safety, protecting Americans’ privacy and promoting innovation. It also requires safety test results to be shared with the U.S. government, develops tools and guidelines to manage AI safety and transparency and calls for a National Security Memorandum to direct future actions on AI. Despite all this, shortcomings such as excessive paperwork requirements and a lack of enforcement of AI standards threaten the executive order’s effectiveness. The E.U., conversely, has more comprehensive legislation to manage specific situations: its recently passed AI Act categorizes AI uses into different risk levels and regulates them accordingly. The U.S. should follow in its footsteps and adopt similar AI regulation policies.

The AI Act splits AI use into four categories: unacceptable, high, limited and minimal risk. Unacceptable risk, such as the use of hidden or deceptive techniques to influence behavior and real-time facial recognition in public, is completely prohibited, with minimal exceptions for facial recognition. High-risk systems like autonomous vehicles, medical devices and critical infrastructure machinery are strictly regulated. Limited-risk AI, which includes chatbots and deepfakes, requires transparency through labeling or disclosures. Lastly, minimal-risk AI, such as spam filters and video games, is unregulated. Offenders who create prohibited AI are fined up to the equivalent of $37,644,250 or 7% of their past year’s total income; offenders in all other categories are fined up to the equivalent of $16,132,575 or 3% of their past year’s total income.

While undeniably useful in many circumstances, AI poses severe safety risks for users. Specifically, AI may threaten rights such as the right to non-discrimination, freedom of expression, human dignity, personal data protection and privacy. Deepfakes, which use AI to fabricate images and videos of events, have been used to manipulate public figures during elections. In January, a robocall in New Hampshire impersonated Biden, telling Democrats not to vote in the upcoming primary election. According to New Hampshire Attorney General John Formella, the robocall reached anywhere from 5,000 to 25,000 people and intended to coerce voters into abstaining out of fear of losing their right to vote. Three voters who received the call and the League of Women Voters filed a federal lawsuit against the political operative and two companies behind the robocall, seeking up to $7,500 in damages.

While faking robocalls isn’t new, generative AI makes it much easier. Generative AI has also created realistic deepfakes of political figures like Biden, Trump and Clinton, spreading fake endorsement videos and lies. Biden’s executive order is clearly insufficient compared to the AI Act’s explicit transparency requirements and penalties. AI is actively threatening democracy, and new legislation needs to pass Congress.

Additionally, vivid image deepfakes of the Israel-Hamas war have been used to spread misinformation. Images circulating on social media have spread fake news and enraged viewers. The Israel-Hamas war is a very real tragedy, but deepfakes muddle the truth to heighten emotions. The social media platform X, formerly Twitter, has seen an especially sharp increase in misinformation, since it promotes users with its blue checkmark verification system even when the content those users spread may be deepfakes. Not all deepfakes are the product of AI, but AI’s convenience makes it a dangerous tool of widespread manipulation.

AI bias, where the data used to train an AI is incomplete, skewed or invalid, is also dangerously prominent in the U.S. For example, security data compiled from predominantly Black regions could create racial bias in AI policing tools. Building biases into AI through algorithmic or cognitive bias, intentional or not, also changes how data collected by AI is weighted. While the AI Act sets clear standards for creating ethical datasets, documenting an AI’s functionality and mitigating risks, the executive order’s standards merely require companies to regularly report to the government and share the results of their red-team safety tests. Both aim to promote AI safety, but the executive order is barely complied with due to its excessive requirements: its 94 mandatory deadlines for reports and tasks, all due within a year, bury government agencies in AI homework. In short, it is too ambitious and unrealistic compared to the much briefer but still comprehensive AI Act.

The executive order is a step in the right direction, as it introduces a baseline, but more steps need to be taken: more comprehensive legislation that enforces the standards it outlines needs to be passed. It’s evident that the U.S. is overdue for new AI regulations, and the AI Act is a prime example of legislation the U.S. should follow.

About the Contributor
Callie Tse (she/her)
Sophomore Callie Tse is a first-year reporter. She is excited to learn more about journalism and increase awareness of events, clubs and little-known details all over the school. She also hopes to improve her writing skills and learn to collaborate better with others in the field of writing. In her free time, Callie enjoys playing piano, reading good books and playing badminton with family and friends.
