
Nathan Rayens, Data Scientist at Data Surge, first became interested in leveraging data for discovery while completing his PhD. He credits his dissertation advisor for supporting his efforts to take on computational projects in parallel with his heavy research workload in biophysics. Nathan found that data-driven insights drastically reduced the timeline from hypothesis to results for his work in the lab, and he quickly realized that data science would provide him with a long-term path that he could continue to explore.

Here, Nathan discusses what organizations need to carefully consider before jumping into the AI race.

Artificial intelligence (AI) has reached critical mass and has become ubiquitous across industries. With ever-increasing use cases, questions arise about the responsible use of AI. This topic is now at the forefront of government initiatives and regulatory efforts around the world.

As businesses delve into implementing exciting AI tools, they should first consider the risks associated with this rapidly changing field and how to best mitigate consumer concerns about human-AI interactions. Here, we’ll discuss five critical components of Responsible AI and what elements your business should consider as you plan your next steps.

1. Human-Centered Design

The most fundamental fear with advances in AI is how it will affect human quality of life. For instance, if you want to open a line of credit at your bank, is it fair to be denied without further explanation just because an AI model (trained on data you can’t see) says you don’t qualify? Many governments are trending toward “No.” Similarly, we have all experienced the frustration of dealing with a chatbot that doesn’t understand our questions or can’t complete the tasks we give it. In the same way, shouldn’t you be able to promptly escalate your request to an actual human representative?

Fortunately, mitigating these concerns is relatively simple, especially when compared to more technical problems. Per recent recommendations from the White House, businesses should acknowledge when AI is used in decision-making and present a summary of the significant inputs and impacts. They should also inform users that real data backs these automated decisions and give some insight into what happens behind the scenes. Additionally, if a user runs into problems with an AI product, it is reasonable to expect the company furnishing the product to offer timely support from a real person.
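As a rough illustration of those recommendations, the sketch below wraps a model’s output in a response that discloses the automation, summarizes the most significant inputs, and points the user toward a human reviewer. The structure, the field names, and the linear-model contribution logic are all hypothetical, not a prescription from the White House guidance or any specific framework.

```python
# A minimal sketch of a decision response that discloses automation,
# summarizes significant inputs, and offers an escalation path to a human.
# Field names are illustrative; the contribution logic assumes a fitted
# scikit-learn linear model (e.g., LogisticRegression) with numeric inputs.
from dataclasses import dataclass

@dataclass
class AIDecision:
    outcome: str                 # e.g., "approved" / "denied"
    automated: bool              # disclose that an AI model made the call
    significant_inputs: dict     # feature -> contribution to the decision
    escalation_contact: str      # how to reach a real person

def decide_credit_line(applicant: dict, model, top_k: int = 3) -> AIDecision:
    features = list(applicant.keys())
    values = [applicant[f] for f in features]
    score = model.predict_proba([values])[0][1]

    # For a linear model, each feature's contribution to the log-odds is
    # coefficient * value; surface the largest ones to the user.
    contributions = {f: coef * v for f, coef, v in
                     zip(features, model.coef_[0], values)}
    top_inputs = dict(sorted(contributions.items(),
                             key=lambda kv: abs(kv[1]), reverse=True)[:top_k])

    return AIDecision(
        outcome="approved" if score >= 0.5 else "denied",
        automated=True,
        significant_inputs=top_inputs,
        escalation_contact="Contact your branch to request review by a loan officer.",
    )
```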

Key Takeaway: These adjustments will not entirely remove apprehension about human-AI interaction, but they are a fundamental first step toward approaching these interactions reasonably, and they are likely to be required by future government regulations.

2. Fairness, Bias, and Discrimination

This section details some of the harder-to-implement human-AI interaction guardrails mentioned above. As with any modeling effort, biases can be introduced during the AI training process. For example, if we trained a chatbot solely on highly dogmatic or prejudiced writings and then asked it to answer a wide variety of questions, we would likely find its answers colored by the source material. Similarly, if we create a facial-recognition tool and train it only on male faces, it may struggle to perform appropriately on female faces.
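A simple first line of defense is to evaluate the model separately on each demographic slice of a held-out set and compare the results. The sketch below assumes a pandas DataFrame with hypothetical `group`, `label`, and `prediction` columns (binary 0/1 labels and predictions); large gaps between rows are a signal to revisit the balance of the training data.

```python
# A minimal per-group evaluation sketch; column names are placeholders.
import pandas as pd

def subgroup_report(eval_df: pd.DataFrame) -> pd.DataFrame:
    return (
        eval_df
        .assign(correct=lambda d: d["label"] == d["prediction"])
        .groupby("group")
        .agg(
            n=("label", "size"),                    # slice size
            accuracy=("correct", "mean"),           # per-group accuracy
            selection_rate=("prediction", "mean"),  # share predicted positive
        )
    )
```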

These biases can be challenging to identify in complicated systems, but they are already subject to civil protections, and these protections are likely to grow as governments become more informed. For example, iTutorGroup recently settled a lawsuit for $365,000 for violations noted by the Equal Employment Opportunity Commission (EEOC). iTutorGroup’s job application software explicitly and automatically rejected older applicants based on age, a blatant violation of U.S. worker protections.

While this may be a simpler scenario than most AI implementations, it is almost certainly the first of many similar lawsuits. AI product providers have a responsibility to evaluate their services closely and mitigate algorithmic bias wherever it is found. One of the most important tools for uncovering bias is causality analysis: knowing that your AI-informed decisions rest on a causal relationship rather than a merely associative one will offer some protection from legal exposure.
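One rough way to probe the causal-versus-associative question, short of a full causal analysis, is to check whether an outcome gap between groups persists after conditioning on a legitimate decision factor. The sketch below uses hypothetical `group`, `approved`, and `score_band` columns; a gap that survives stratification deserves much closer scrutiny.

```python
# A crude stratified check, not a full causal analysis; columns are placeholders.
import pandas as pd

def stratified_gap(decisions: pd.DataFrame) -> pd.DataFrame:
    # Raw (unadjusted) approval rates by group.
    print(decisions.groupby("group")["approved"].mean())

    # Approval rates by group *within* each band of a legitimate factor.
    # If the between-group gap largely disappears here, the raw disparity was
    # mostly associative; if it persists, the decision may depend on group
    # membership itself and warrants deeper investigation.
    return decisions.pivot_table(index="score_band",
                                 columns="group",
                                 values="approved",
                                 aggfunc="mean")
```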

Key Takeaway: Without true intuition, AI is highly susceptible to bias. The fundamental truth is that if your AI product handles different people in different ways because of demographic features, you may be perpetuating discrimination and taking on legal risk.

3. Transparency

AI systems are sometimes so convoluted that it’s challenging to understand the decision-making process. Put simply, when all the computational and decision-making steps occur in a black box, the end-user cannot truly understand how the system got from point A to point B.

It’s imperative that adopters of AI strive for transparency in their methods, for the benefit of both internal users and end customers.

There is not just one method for achieving transparency in AI. For example, we recently worked on a Retrieval Augmented Generation (RAG) accelerator project using a “thought process” approach. The documents in our reference library most closely matching a user’s input query are ordered and presented to the user in a secondary panel prior to providing the final answer. This allows the user to see exactly where the model pulled its answers from and speculate on why certain details in the reference library were more closely related to the query than others. Alternatively, attention methods, key parameter weightings, and feature importance can all be used to bolster transparency because they help internal and external users understand numerically how complicated decisions are being made.
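A minimal sketch of that “thought process” pattern follows: score the reference documents against the query, show the top matches (with their similarity scores) to the user, and only then hand them to the generator as context. The embedding vectors and the downstream answer-generation step are assumed to come from whatever RAG stack you already use.

```python
# Illustrative retrieval step that surfaces its sources to the user before
# any answer is generated; vectors are assumed to be precomputed embeddings.
import numpy as np

def retrieve_with_sources(query_vec: np.ndarray,
                          doc_vecs: np.ndarray,
                          doc_titles: list[str],
                          top_k: int = 3) -> list[tuple[str, float]]:
    # Cosine similarity between the query and every reference document.
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9)
    order = np.argsort(sims)[::-1][:top_k]

    retrieved = [(doc_titles[i], float(sims[i])) for i in order]
    for title, score in retrieved:          # the "secondary panel"
        print(f"[source] {title} (similarity={score:.2f})")

    return retrieved  # pass these documents to the generator as context
```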

AI transparency can be achieved in a myriad of ways, but it must be at the forefront of product design decisions in these earliest stages of AI adoption. There are also tangible model-performance benefits when technical users can see the “thought process” playing out.

Key Takeaway: Regulatory requirements for transparency in AI modeling are coming (see the EU’s AI Act), so it is essential that transparency is front-of-mind for current and future AI products.

4. Intellectual Property

AI allows you to accomplish an extraordinarily high volume of work in very little time, whether you are developing software, writing content, or even virtually staging property for sale. Even though AI is delivering increasingly high-quality results, it is, objectively, partly riffing on the existing content it was trained on. This predisposes AI-generated content to hotly contested legal exposure from potential copyright infringement.

Separate from accuracy, bias, or appropriateness, when I prompt an image generator to create a rendering or convert a photo into the style of my favorite artist, I will likely get exactly what I ask for, but the AI has no incentive or ability to credit that artist. This problem is exacerbated when I don’t make an explicit reference (“Make this picture look like it’s from the Dutch Golden Age”). The end-user has no way to know which elements of a vast training pool were drawn on to generate that image, and there is no way to effectively credit the originator of the work.

This prompts a convoluted discussion of ownership. Because elements of existing work are used to generate new work, does the AI ‘own’ its creations, or are these works derivatives of the originals? We have already seen significant legal cases filed against AI providers; for example, the New York Times sued OpenAI and Microsoft at the end of 2023 for copyright infringement. It’s challenging to predict how copyright protections will shift in response to AI, but it’s imperative that adopters of generative methods are aware of their potential legal exposure as the regulatory landscape continues to evolve.

Key Takeaway: The race to deploy AI tools has begun, and the potential is truly exciting, but AI-driven regulation is inevitable, and rushing head-first into generative tasks can expose your company to unnecessary risks!

5. Privacy and Security

AI models regularly interact with sensitive data, like financial reports, health information, and criminal records. Responsible AI addresses concerns about the usage of this type of data in two ways:

  1. You should know when your data is being used and what it is being used for, and
  2. You should have a reasonable expectation that your data is being protected from uncontrolled access.

Based on current policy sentiment, it appears likely that AI products will be restricted from collecting and holding data if users opt out, similar to today’s Internet and connected devices. Building your products for this likelihood will help keep your business in compliance and prevent future headaches.

An important strategy against invasive data collection is federated or device-level learning, which can minimize the potential for controlled or sensitive information traversing your system inappropriately. This issue is also tied to data governance and adversarial behavior. Because many AI models are interactive and deal with highly sensitive information (e.g., PII), it is important that you work to bolster your defenses against inappropriate data access. Common approaches involve internal and external analysis of how the system could be exploited.
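As a rough illustration of the federated idea, the sketch below has each client compute a model update on its own data and send back only the updated weights, which the server averages; no raw records leave the device. Real deployments layer on secure aggregation, differential privacy, and client sampling, none of which are shown here.

```python
# A bare-bones federated-averaging round on a least-squares model;
# purely illustrative, with no secure aggregation or privacy noise.
import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1) -> np.ndarray:
    # One gradient step computed entirely on the client's private data.
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_weights: np.ndarray, clients) -> np.ndarray:
    # `clients` yields (X, y) pairs that never leave their devices;
    # the server only ever sees the averaged weight vectors.
    updates = [local_update(global_weights, X, y) for X, y in clients]
    return np.mean(updates, axis=0)
```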

Key Takeaway: Responsible AI overlaps substantially with current strategies for system security. Because AI can be much more interactive and complex, security should be a priority for all companies providing AI products.

Summary

AI is growing and changing so rapidly that regulation will need to continually shift to protect citizens from malicious behaviors or negative outcomes. We have already seen first-of-its-kind legislation emerge and landmark legal cases argued. Before exposing your company to legal risk, you should partner with the AI governance and responsible AI experts at Data Surge. Preparing for these coming changes now will likely help save you from future headaches and regulatory compliance mistakes.
