The Ethics of AI: 4 Essential Questions We Should Ask

July 31, 2023

A year ago, if I’d said “AI” at my dinner table, my (mature) family wouldn’t have known what I was talking about. Except for the kids, of course. The kids already know everything.

Recent widespread access to consumer-facing generative artificial intelligence tools has sparked global conversations, ranging from fears of robot takeovers to excitement about time-saving tasks being taken off our full plates at work.

Subject matter experts worldwide have been doubling down on creating machine learning resources for the masses, while policymakers consider regulatory steps to provide guardrails as bad actors have a field day stress-testing our current systems. 

At the same time, our technology policies struggle to keep pace with the speed of innovation, much of the population can’t effectively tell fact from fiction online, and privacy is blatantly ignored by some of the same institutions that tout its necessity.

"In short, artificial intelligence is now a player in the shaping of knowledge, communication, and power."

Kate Crawford
Atlas of AI

Answering four main questions surrounding artificial intelligence

How might we gain input on the direction in which we nurture AI’s impact? How might we proactively mitigate the harm caused by AI? As individuals, corporations, and lawmakers, how might we minimize the risk of opening a can of machine learning worms?

It starts with ethics: with each one of us, as individuals, making ethical decisions.

We are innovators. We are workers. We are families. We are communities. We are businesses. We are nations. We are a global humankind. We are building, feeding, and teaching the machines and therefore have 100% input on their output. 

AI will affect every one of us on this planet and every one of us has a stake and a voice in how it is – and isn’t – allowed into our lives.

We learn from our mistakes in life and business, and AI is no different. Learning is the very foundation of AI; it is, after all, called machine learning. How we build it determines what it puts out. So where do ethics apply here?

Ethical tenets must be implemented in the four major stages of the entire AI lifecycle: 

  • How we build it
  • What we put into it
  • What we do with the output
  • How we mitigate unintended and inevitable consequences

Omitting that final step in the lifecycle is – you guessed it – unethical.

These stages may seem like perfectly reasonable milestones for assigning rules and guidelines. We’ve been living alongside machine learning algorithms since the 1950s. We’re several years into drafting global data and AI ethical standards. And yet, we’re far from agreement and even further from adoption.

If we look at some current legal hurdles for big tech, it’s clear that those responsible for making decisions at each stage of AI’s lifecycle aren’t taking ethical considerations seriously.

Ethical questions surrounding AI

So how do we insist upon ethical practices by those involved at each stage of the AI lifecycle?

We ask questions, we ask more questions, then we ask those same questions again, and we never stop asking the questions.

  1. Who are the decision-makers at each stage? We need answers to this to mitigate bias, ensure best practices, and include diversity of thought.
  2. Whom are the decisions being made and optimized for? Once again, this reduces bias, but more importantly, it ensures the impact on all parties is evaluated before moving forward.
  3. What capital is required to fuel AI at scale? This is needed to make logical, long-term benefit-cost analyses.
  4. What are the social, political, and economic impacts? Cause-and-effect understanding is necessary to continuously correct the guidelines over time. (I like to think of this step as being aligned with agile product development: launch, learn, repeat.)

How AI impacts labor and the economy

Three recent case studies from Stanford, MIT, and Microsoft Research found similar results: employee productivity grew for workers using generative AI tools compared with counterparts who did not use such tooling to accomplish their tasks.

Across varying disciplines (customer support, software engineering, and business document creation), the empirical data shows business users increasing their throughput by an average of 66%. In the best of scenarios, that saves time on cognitively demanding tasks, creating the conditions for more personalized human touches, imagination, and polished deliverables.

With increased productivity at scale, fears arise that some jobs will eventually become obsolete. Historically, industries go through a natural lifecycle when new innovations hit the labor markets. For example, ever wondered what happened to telephone operators?

No one has a magical switch that allows under-skilled or under-qualified workers to immediately enter industries requiring more advanced skills. That leaves a skills gap, one that historically relies upon and exhausts social safety nets. These gaps take time to identify, fund, and fill. Even as some countries proactively support upleveling their workers’ skills, data shows the most vulnerable segments of our global population tend to be disproportionately affected during these innovative heydays.

While economic forecasts strongly indicate positive labor market impacts from generative AI use in business, do we fully know what is at risk in this economic boom?

Creatives such as artists, musicians, filmmakers, and writers are behind several class action lawsuits against OpenAI and Facebook’s parent company, Meta. The big tech companies that benefit from AI refute claims that the artists’ copyright-protected work has been unlawfully used to train AI models. Meanwhile, artists are deleting online accounts in droves, and high-profile creative companies like Getty Images are filing lawsuits of their own. In response, the FTC recently investigated OpenAI’s online data scraping practices.

This is a perfect example of the four stages of AI’s lifecycle. Let’s ask our ethical questions: 

  1. Who made these decisions? Not the creatives.
  2. Who were the decisions optimized for? Not the creatives. 
  3. What was the capital cost? Human capital? Financial capital? Natural capital? Perhaps it was across all three, at the expense of the creatives.
  4. Was there consideration of social, political, and economic impacts? Perhaps, but by whom? Not the creatives.

Are we willing to risk a generation of creatives and their adjacent industries withholding work from online publication? How would that affect our creative cultural evolution and creators’ livelihoods, and what long-term social and political impact would it have? Did someone think through this potential impact, determine whether the legal and reputational risks were justified, and decide to move forward anyway?

Maybe. Or maybe they simply didn’t think it through at all. In either case, the decision was unethical, regardless of their interpretation of the legal implications.

As a global economy, it’s critical that we identify organizations operating within ethical practices so we can prioritize supporting them over those infringing upon ethical standards. By not surfacing the ethical posture of decision-makers, we risk inadvertently looking the other way at precisely the moment we need widespread scrutiny.

Takeaway question: How might we gauge, measure, or identify a company's ethical posture?

Let us know here.

How AI makes an environmental impact

AI runs on energy-intensive infrastructure. Its environmental impact is largely out of sight and out of mind, and often an afterthought in a space like the tech sector.

The MIT Technology Review reported that training a single AI model can emit as much carbon as five cars over their lifetimes, the equivalent of more than 626,000 pounds of carbon dioxide. Earth minerals also play a large part in fueling generative AI’s mass computational processing. Mining for the metals needed in the physical infrastructure of computation often comes at the cost of local conflict and geopolitical violence.

“Without the minerals from these locations, contemporary computation simply does not work.”

Kate Crawford
Atlas of AI

Remember our third ethical question: What capital is required to fuel AI at scale? We ask it to make logical, long-term benefit-cost analyses. Natural capital, in the form of impact on our planet, should not be left out of the equation if we’re brave enough to ask the right questions.

Asking the right questions can be scary, especially when the questions implicate your own livelihood as a source of contention. But knowledge is power, and technologists must embrace transparency if they are ultimately to participate in ethical technology solutions.

It’s not corporate sabotage! A group of machine learning practitioners “who are also aware of the overall state of the environment” committed themselves to building support tools to assess the carbon emissions generated by their work. After assessment, they can compute ways to reduce those emissions. They even made this Emissions Calculator so other AI practitioners can calculate estimates. 
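
To make the arithmetic concrete, here is a back-of-the-envelope sketch, in Python, of the kind of estimate such a calculator performs. Every name and figure in it (the function, the power draw, the data-center overhead, and the grid carbon intensity) is an illustrative assumption, not a value taken from the calculator itself.

```python
# A rough sketch of a training-run carbon estimate. All defaults below are
# illustrative assumptions, not values from any specific calculator.

def training_emissions_kg(avg_power_kw: float,
                          training_hours: float,
                          pue: float = 1.58,
                          grid_kg_co2_per_kwh: float = 0.43) -> float:
    """Estimate the CO2 footprint of a single training run.

    energy (kWh) = average hardware power draw x hours x data-center overhead (PUE)
    emissions    = energy x carbon intensity of the local electricity grid
    """
    energy_kwh = avg_power_kw * training_hours * pue
    return energy_kwh * grid_kg_co2_per_kwh

# Hypothetical example: a 300 kW GPU cluster training for two weeks.
print(f"{training_emissions_kg(300, 24 * 14):,.0f} kg of CO2")
```

Even a rough estimate like this makes the natural capital cost visible enough to compare training runs, pick lower-carbon regions, or schedule jobs when the grid is cleaner.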

Takeaway question: How might we encourage technologists and providers to be brave in their AI transparency?

Let us know here.

How ROI-yielding frameworks affect AI ethics

Regulation alone cannot solve our AI woes. Technologists are often motivated by metrics that can seem ethically agnostic because they’re not regulated, yet still yield a return on investment. What are these ROI-yielding frameworks? Where do we see rule sets in the wild that return some form of reward to the rule-following company?

Let’s consider the Google PageRank algorithm as an example of a non-regulatory influence on technology ethics. PageRank analyzes a “variety of signals that align with overall page experience,” including elements that follow UX best practices, ADA guidelines, and privacy policies.

Avoiding dark patterns will mean favorable rankings. Not being ADA compliant will mean less favorable rankings. By improving a site’s presence and following Google’s guidelines, we see ethical decisions being made, perhaps inadvertently, based on adherence to a non-regulatory set of rules.

Why should your company’s site follow suggested best practices from another company’s algorithm? Because doing so gives you the best chance of ranking well on Google. The effect on a company’s online discoverability and perceived importance, which in turn affects its bottom line, is a motivator, and thus influences ethical practice without regulatory enforcement.
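
For readers curious about the mechanism underneath, below is a minimal sketch of the classic PageRank link-analysis algorithm, the computation the ranking system is named for. Google’s production ranking layers many more signals on top (including the page-experience factors above), so treat this as an illustration of the reward structure, not the real system; the function and example graph are hypothetical.

```python
import numpy as np

def pagerank(adjacency: np.ndarray, damping: float = 0.85,
             tol: float = 1e-9, max_iter: int = 100) -> np.ndarray:
    """Classic PageRank by power iteration.

    adjacency[i, j] = 1.0 if page i links to page j.
    Returns each page's rank as a probability distribution.
    """
    n = adjacency.shape[0]
    out_degree = adjacency.sum(axis=1, keepdims=True)
    # Pages with no outlinks spread their rank evenly across all pages.
    transitions = np.where(out_degree > 0,
                           adjacency / np.maximum(out_degree, 1.0),
                           1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_rank = (1 - damping) / n + damping * (transitions.T @ rank)
        if np.abs(new_rank - rank).sum() < tol:
            break
        rank = new_rank
    return rank

# Three pages: 0 and 1 link to each other, and both link to 2 (which links nowhere).
links = np.array([[0, 1, 1],
                  [1, 0, 1],
                  [0, 0, 0]], dtype=float)
print(pagerank(links).round(3))
```

The incentive is baked into the math: rank flows along links, so the more well-regarded pages point to yours, the more visibility you earn by playing by the rules.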

Takeaway question: How might we hold our technologists accountable for their ethical practices outside the traditional regulatory space? What do they find value in? Where do they derive fuel for their success?  

Let us know here.

It begins with us

No matter who you are, you play a role in minimizing the risks that go hand in hand with using artificial intelligence and machine learning tools unethically. As individuals, it’s crucial we make ethical decisions around how we use AI and about how – and what – we teach these machines about society.

The story of AI is just getting started and how it will fully transform the future is a story that isn’t written… yet. Thankfully, we have a say in how AI evolves both in our personal and professional lives. It all comes down to making sure ethics are top of mind. 


G2 wants to hear from you!

If you’re interested in AI ethics, please share your thoughts on what’s missing from this conversation and what matters most to you, your industry, company, or livelihood. I plan to continue developing this conversation and sharing subsequent articles based on the insights and learnings from you and the rest of the G2 community.

Want more thought leadership? This article is part of the G2 Voices series that features a variety of influential G2 leaders.


