The 5 Insidious Ways AI Has Already Impacted Your Life For Years

You’re probably getting the wrong idea about AI, and it’s not your fault. The last few months have been filled with tales of the technology’s supposed power and capabilities, ranging from the sensational to the downright ridiculous. It certainly didn’t help when AI pundits and pioneers fed into this by signing open letters calling for a pause in AI research and warning of an impending extinction-level event.

AI is not just chatbots and image generators. Nor is Skynet about to go live and destroy all of humanity. In fact, AI isn’t even that smart, and glossing over its history of hallucinating facts and making bad decisions has allowed it to cause real harm to humans.

While there are many factors at play when it comes to these harms, the vast majority of them boil down to the perennial problem of bias. AI bots like ChatGPT, and even the algorithms used to recommend YouTube videos, are all trained on massive amounts of data. That data comes from human beings, many of whom are unfortunately biased, racist and sexist.

For example, if you’re trying to build a bot that determines who should get into a university, you could train it on demographic information for the people who have historically earned degrees. If you did, you’d likely end up admitting mostly white men while rejecting large swathes of people of color, because minorities have historically been disproportionately rejected by universities.
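To make the mechanism concrete, here is a minimal sketch of how a naive model trained on historical admissions records simply reproduces whatever disparity is in the data. The group names and admit rates are hypothetical, invented purely for illustration:

```python
# Hypothetical historical admissions records: (group, admitted).
# The 70% vs 30% disparity is illustrative, not real data.
history = (
    [("group_a", True)] * 700 + [("group_a", False)] * 300
    + [("group_b", True)] * 300 + [("group_b", False)] * 700
)

def train(records):
    """A naive 'model' that just learns each group's historical admit rate."""
    totals, admits = {}, {}
    for group, admitted in records:
        totals[group] = totals.get(group, 0) + 1
        admits[group] = admits.get(group, 0) + int(admitted)
    return {g: admits[g] / totals[g] for g in totals}

model = train(history)
# The model faithfully reproduces the historical disparity: applicants
# from group_b score far below group_a on a purely demographic feature.
print(model)  # {'group_a': 0.7, 'group_b': 0.3}
```

Nothing in this toy model is malicious; the bias comes entirely from the training data, which is exactly the dynamic described above.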

This is not an exaggeration. We’ve seen this play out again and again in different ways. While public discourse around AI has exploded in recent months, it has quietly shaped many aspects of our lives for years. Long before ChatGPT, AI programs were used to determine unemployment benefits, whether or not you could secure housing, and even the kind of healthcare you received.

This context provides a realistic picture of what this technology can and cannot do. Without it, you might well fall for the AI hype, and that can be incredibly dangerous in its own right. With the hype comes misinformation and misleading claims about these bots. While there are many different ways this technology has worked its way into our lives, here are five of the most notable examples we’ve seen play out.

Home loans

If you want to buy a house, you’ll probably have to go through an algorithm. For example, your FICO credit score is the result of an algorithmic process that largely determines whether or not you secure a loan of any shape or size.

However, you will also likely go through an AI approval process. In 1995, Fannie Mae and Freddie Mac introduced automated underwriting software that promised to make the home loan approval or rejection process faster and more efficient by using artificial intelligence to assess whether or not a prospective borrower might default on the loan.

While these systems were promised to be colorblind, the results say otherwise. A 2021 report by The Markup found that mortgage lending algorithms in the US were 80% more likely to reject Black applicants, 50% more likely to reject Asian and Pacific Islander applicants, 40% more likely to reject Latino applicants and 70% more likely to reject Native American applicants than similar white applicants.

Those numbers climbed even higher in cities like Chicago, where Black applicants were 150% more likely to be rejected than their white counterparts, and in Waco, Texas, where Latino applicants were 200% more likely to be rejected.

Jail and prison sentences

We think of judges and lawyers when it comes to meting out punishment or showing leniency in court. In reality, much of that work is now done by algorithms that score a defendant’s recidivism risk, or likelihood to reoffend.

In 2016, ProPublica found that a commonly used risk-assessment AI falsely flagged Black defendants as likely to reoffend at nearly twice the rate of white defendants (45% versus 23%). White defendants, meanwhile, were more often labeled low risk yet went on to commit new crimes, skewing the predicted recidivism rates.

The same tool is still used today for criminal risk assessment in states including New York, California, Florida and Wisconsin.

Job hiring

As if the job hunting process wasn’t maddening enough, you may be faced with a racist HR robot reading your resume.

Recruitment bots come in a variety of different forms. HireVue, an employee recruitment firm used across the country at companies like Hilton and Unilever, offers software that analyzes candidates’ facial expressions and voices. The AI then rates them, giving companies an assessment of how they compare to their current employees.

There are even AI programs that analyze resumes, quickly screening your CV for the appropriate keywords. This means you may get rejected before an HR person ever looks at your cover letter. The result, as with so many other AI applications, is a disproportionate rejection of candidates of color over similar white candidates.
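The keyword-screening step described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual criteria; the keyword list and threshold are invented for the example:

```python
# Hypothetical required keywords a screener might look for.
REQUIRED_KEYWORDS = {"python", "sql", "leadership"}

def screen(resume_text, required=REQUIRED_KEYWORDS, threshold=2):
    """Reject a resume before any human reads it if too few keywords match."""
    words = set(resume_text.lower().split())
    hits = len(required & words)
    return "advance" if hits >= threshold else "reject"

print(screen("Built Python and SQL data pipelines"))       # advance
print(screen("Ten years of people management experience"))  # reject
```

Note how brittle this is: the second candidate may be perfectly qualified, but because their resume phrases experience differently, the filter never lets a human see it. That is how systematic differences in wording or background turn into systematic rejection.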

Medical diagnosis and treatment

Hospital systems and doctor’s offices are no strangers to using automated systems to assist in the diagnostic process. In fact, places like the Mayo Clinic have been using AI for years to help identify and diagnose things like heart problems.

However, bias inevitably rears its ugly head when it comes to AI, and medicine is no exception. A 2019 study published in Science found that an algorithm used to manage healthcare populations often resulted in Black patients receiving worse care than similar white patients. Less money was also spent on Black patients with the same level of need as white ones.

With the rise of ChatGPT and various health tech startups looking to build diagnostic chatbots (with varying degrees of success), many experts are now concerned that these bias issues could exacerbate the harms we’ve already seen from chatbots. Nor does it help that the medical community has a sordid history of scientific racism.

Recommendation algorithms

Perhaps the most visible example of how AI affects your daily life is the same reason you probably came across this article in the first place: social media algorithms. While these AIs do things like show you your friend’s latest Instagram photo from their recent vacation in Italy, or your mother’s embarrassing Facebook status, they can also do things like elevate extremist content on YouTube or promote far-right agendas on Twitter.

These algorithms have repeatedly been duped by bad actors to push political narratives. We see this again and again on Facebook, where huge troll farms based in places like Albania and Nigeria are used to push disinformation in an attempt to influence elections.

At best, these algorithms can show you a fun new video to watch on YouTube or Netflix. At worst, that video is trying to convince you that vaccines are dangerous and that the 2020 election was stolen.

That’s the nature of AI, though. These are technologies with great potential to make decisions easier and more efficient. But when they’re weaponized by bad actors, exploited by greedy corporations, and lazily applied to historically racist and biased social systems like incarceration, they ultimately do far more harm than good, and you don’t need an AI to tell you that.
