
Artificial intelligence isn’t just a niche tool for cheating on homework or generating bizarre and deceptive images. It’s already humming along in unseen and unregulated ways that are touching millions of Americans who may never have heard of ChatGPT, Bard, or other buzzwords.
A growing share of businesses, schools, and medical professionals have quietly embraced generative AI, and there's really no going back. It is being used to screen job candidates, tutor kids, help people buy homes, and dole out medical advice.
The Biden administration is trying to marshal federal agencies to assess what kind of rules make sense for the technology. But lawmakers in Washington, state capitals, and city halls have been slow to figure out how to protect people’s privacy and guard against echoing the human biases baked into much of the data AIs are trained on.
“There are things that we can use AI for that will really benefit people, but there are lots of ways that AI can harm people and perpetuate inequalities and discrimination that we’ve seen for our entire history,” said Lisa Rice, president and CEO of the National Fair Housing Alliance.
While key federal regulators have said decades-old anti-discrimination laws and other protections can be used to police some aspects of artificial intelligence, Congress has struggled to advance proposals for new licensing and liability systems for AI models and requirements focused on transparency and kids’ safety.
“The average layperson out there doesn’t know what are the boundaries of this technology?” said Apostol Vassilev, a research team supervisor focusing on AI at the National Institute of Standards and Technology. “What are the possible avenues for failure and how these failures may actually affect your life?”
Illustrations by Anna Kim for POLITICO