An alphabet soup of U.S. government agencies has taken steps to regulate artificial intelligence (AI). Last year, Congress passed the National Artificial Intelligence Initiative Act, which creates many new AI initiatives, committees, and workstreams to prepare the federal workforce, conduct and fund research, and identify and mitigate risks. In November 2021, the White House announced an effort to create a bill of rights for an automated society. Members of Congress are introducing bills, such as the Algorithmic Accountability Act and the Algorithmic Fairness Act, that aim to promote ethical decision-making in AI. At the state level, lawmakers in at least 17 states introduced AI legislation in 2021.

Bias is a particular risk in AI and machine learning systems because they are not designed to solve a problem according to a set of rules, but to “learn” from examples of what the solution looks like. If there is bias in the datasets that provide these examples, an AI system is likely to replicate and amplify it. For example, if the successful candidates included in the training data share certain characteristics (e.g., a particular gender, demographic, or education profile), there is a risk that candidates whose profiles do not match those criteria will be excluded.

Bradford Newman, a Baker McKenzie lawyer who specializes in AI and trade secrets, said the results of a recent survey, based on responses from companies with annual revenue of at least $10 billion, show executives are making a big mistake. Existing and upcoming regulations could put companies in legal jeopardy if they do not comply with the rules.
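To make that mechanism concrete, here is a minimal sketch using entirely synthetic data. Every number and feature name is illustrative, not drawn from any real system or from the survey above; it simply shows how a model trained on historically biased hiring decisions can reproduce the bias even when the sensitive attribute is excluded from the inputs:

```python
# Hypothetical sketch: a model trained on biased hiring decisions
# replicates that bias even without seeing the sensitive attribute.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: group membership plus a skill score.
group = rng.integers(0, 2, size=n)             # two demographic groups
skill = rng.normal(loc=0.0, scale=1.0, size=n)

# A correlated proxy feature (e.g., attended a "preferred" school),
# more common in group 1 regardless of skill.
proxy = (rng.random(n) < np.where(group == 1, 0.7, 0.3)).astype(float)

# Historical labels: past decisions favored group 1 independent of skill.
hist_score = skill + 1.5 * group + rng.normal(scale=0.5, size=n)
hired = (hist_score > 1.0).astype(int)

# Train WITHOUT the group column -- only skill and the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The model still recommends group 1 at a much higher rate, because
# the proxy feature carries the historical bias into its predictions.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted hire rate = {pred[group == g].mean():.2%}")
```

Dropping the sensitive column is not enough: the proxy feature encodes the same historical preference, so the trained model's recommended hire rates still diverge sharply by group.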
With all this hustle and bustle of activity, you might think that no legal requirements yet apply to AI. But you would be wrong. Many requirements already on the books touch on AI, and more are on the way, including U.S. local, state, and federal requirements to be aware of.

Experts say incidents like these show how AI bias can harm businesses. A biased AI system can damage a company's credibility and reputation while producing unfair, harmful, or simply useless results. Machine learning algorithms and artificial intelligence systems affect many aspects of people's lives: which news articles they see, which movies to watch, whom to spend time with, access to credit, and even capital investment. Algorithms have been empowered to make such decisions and take action for reasons of efficiency and speed. Despite these advances, there are concerns about the rapid automation of workplaces (including professions such as journalism and radiology). A better understanding of attitudes toward and interactions with algorithms is essential precisely because of the aura of objectivity and infallibility that our culture attributes to them.
This report illustrates some of the shortcomings of algorithmic decision-making, identifies key issues around algorithmic errors and biases, and examines approaches to combat these problems. It also highlights the additional risks and complexities associated with using algorithmic decision-making in public policy, and concludes with an overview of approaches to address these issues.

The decisions people make about how to present information from AI models can also skew users' interpretations. Consider how internet search engines display higher-scoring results first, implying that they are more relevant or even more true. Users, in turn, may treat the top-ranked results as the “best results” and never click on potentially more accurate results, which sink further down the ranking with each non-click. These biases can shape our understanding of truth and facts. The many ways biases can penetrate AI models can affect automated decision-making applications and make them systematically unfair to certain groups of people. You've probably seen the headlines.
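A minimal simulation of that feedback loop, with purely hypothetical parameters rather than any real search engine's algorithm: a ranker that orders results by accumulated clicks quickly entrenches whatever starts on top, regardless of underlying quality.

```python
# Hypothetical sketch of a click-feedback loop in ranking.
# Parameters are illustrative, not any real system's values.
import random

random.seed(0)

# Item B is actually more relevant than item A,
# but item A starts with more historical clicks.
items = {"A": {"quality": 0.5, "clicks": 100},
         "B": {"quality": 0.9, "clicks": 10}}

def ranked(items):
    # Rank purely by accumulated clicks (a popularity signal).
    return sorted(items, key=lambda k: items[k]["clicks"], reverse=True)

for _ in range(10_000):
    order = ranked(items)
    # Position bias: users examine the top result far more often;
    # a click also depends on the item's true quality.
    for position, key in enumerate(order):
        p_examine = 0.9 if position == 0 else 0.2
        if random.random() < p_examine * items[key]["quality"]:
            items[key]["clicks"] += 1
            break  # the user clicks at most one result

print(ranked(items), items)
```

Running this, item A stays on top indefinitely: because B is rarely examined in second position, each non-click keeps it buried, even though B is the higher-quality result.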
Online recruitment tools that unfairly screen out women and minorities. Facial recognition tools used in law enforcement that misidentify certain demographic groups. Algorithms that put white patients ahead of patients of color in health care. While the potential human cost of biased AI is paramount, what are the legal implications of misguided, flawed, or discriminatory algorithms for your business? This virtual briefing will clarify where algorithmic bias crosses the line into illegality and what companies can do about it.

“And those are still the motivators in the hallways of most companies,” Newman said. “Bias falls dramatically down the list, because in most internal structures, legal and HR come last.”

Issues of bias, whether human or embedded in AI tools, remain important considerations for any employer or consultant looking to navigate this terrain for a client, given the potential legal liability and reputational risk to a company that uses AI tools or processes (human or otherwise) with discriminatory effects. Please contact the authors, Al Leiva or Nakimuli Davis-Primer, or a member of Baker Donelson's Labor Law Group if you have any questions about the effective use of this technology.
It didn't work out that way. When it comes to bias, AI has not lived up to its potential. The extension or withholding of job offers would clearly have legal or similarly significant effects. Experts have highlighted five specific ways AI bias can harm a business. “[Bias] is any idea that doesn't really represent reality,” explained Shay Hershkovitz, SparkBeyond's head of research for its climate change initiatives and an expert on AI bias.

The list goes on, and attention to these problems is rightly increasing. Many leaders are now aware of the hidden and unfair processes in their systems. They recognize that bias can cost their businesses dearly in the form of brand and reputational damage, lost revenue, undermined employee recruitment and retention, and regulatory fines. They want to reduce these risks and make their AI a force for good.
Amazon's drone dilemma: Amazon is struggling to get its drone delivery program off the ground, and some current and former employees fear the company will take “unnecessary risks” to get the project back on track, according to a Bloomberg News report. Amazon now plans to expand its drone delivery tests in towns such as College Station, Texas, and Lockeford, California, with drones flying beyond the visual line of sight of human observers, according to the report. An Amazon spokesperson told Bloomberg: “No one has ever been injured or harmed by these flights, and any testing will be conducted in accordance with all applicable regulations.”

With all these regulations, as well as the many research papers on AI bias that have made headlines, executives should already be aware of the risk AI poses to their businesses. But as Newman explained, many AI projects are run by technologists, and their main concern is whether they can build AI that works and is safe from hackers. Across industries and regions, I've seen many examples of AI going wrong. Studies have shown that mortgage algorithms charge higher interest rates to Black and Latino borrowers, and there have been blatant cases of recruitment algorithms exacerbating prejudice against hiring women.
A number of studies of facial recognition software found that darker-skinned women were up to 37% more likely to be misidentified than people with lighter skin tones. An algorithm widely used to predict clinical risk has produced racially skewed referrals to specialists, perpetuating racial bias in health care. Natural language processing (NLP) models for detecting unwanted speech online have wrongly censored comments that mention disabilities, depriving people with disabilities of the opportunity to participate in online speech on an equal footing. “We should address the issue of bias and fairness in AI. It's not only the right thing to do, it's also good for business,” Shah said.

Many of the companies I work with to reduce AI bias face a common problem: AI models are incredibly complicated and designed to evolve, making them not only a difficult target but a moving one. They are also proprietary and contain information that cannot be disclosed, further complicating the task of identifying and mitigating potential biases.

Citing potential AI-driven bias issues, the EEOC has reportedly begun investigating allegations of AI-based discrimination in recruiting and hiring.13 These efforts parallel the Federal Trade Commission's (FTC) scrutiny of financial institutions' automated decision-making processes to enforce the provisions of the Fair Credit Reporting Act (FCRA) and the Equal Credit Opportunity Act (ECOA).
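For employers assessing this kind of exposure, one common screening heuristic is the EEOC's “four-fifths rule”: if the selection rate for one group is less than 80% of the rate for the highest-selected group, the process may be flagged for adverse impact. A minimal sketch, with made-up counts; this is a screen, not a legal conclusion:

```python
# Hypothetical adverse impact ("four-fifths rule") check.
# Applicant counts below are invented for illustration only.

selected = {"group_a": 48, "group_b": 22}   # applicants selected, per group
applied  = {"group_a": 100, "group_b": 80}  # applicants overall, per group

# Selection rate per group, compared against the best-performing group.
rates = {g: selected[g] / applied[g] for g in applied}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {ratio:.2f} -> {flag}")
```

With these illustrative numbers, group_b's selection rate (27.5%) is only 57% of group_a's (48%), well below the four-fifths threshold, so the process would be flagged for closer statistical and legal review.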
Although adopted in the 1970s, both statutes apply to automated decision-making and have been applied to automated credit underwriting models for decades.14

DISCLAIMER: Due to the generality of this update, the information contained in this document may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.