
Ethics of Facial Recognition: Key Issues and Solutions

January 25, 2022


Facial recognition is considered one of the most fascinating technological marvels of our time.

Rightly so, since it can recognize a human face in a photo, a video, or a real-time stream. Image recognition systems have come a long way in accuracy, speed, and algorithms, from their inception to adoption in law enforcement and widespread use in consumer devices.

Given the many controversies surrounding the ethics of facial recognition, such as the identity fraud and privacy invasion concerns voiced by critics and privacy advocates, we're greeted with the million-dollar question: Does facial recognition need an ethical reckoning to make it more equitable and impactful?

What are the ethical issues of using facial recognition technology?

In recent years, critics have questioned facial recognition systems' accuracy and their role in identity fraud. In several cases, law enforcement agencies mistakenly implicated innocent people in riots. Additionally, how identities are stored and managed remains questionable to many, haunting privacy advocates worldwide. Seems complicated, doesn't it?


Source: AMA Journal of Ethics

The top six ethical concerns related to facial recognition systems are racial bias due to testing inaccuracies, racial discrimination in law enforcement, data privacy, lack of informed consent and transparency, mass surveillance, and data breaches coupled with ineffective legal support. Let's examine each of them in detail.

1. Racial bias due to testing inaccuracies

Racial bias remains one of the key concerns with facial recognition systems. Although facial recognition algorithms can achieve classification accuracy of over 90%, these results are not universal.

Worrying developments that challenge the ethics of facial recognition have emerged time and again. More than half of American adults, nearly 117 million people, have photos in law enforcement's facial recognition network. More disturbing still, errors in face recognition systems have proven more common when matching dark-skinned faces than light-skinned ones.

In July 2020, the National Institute of Standards and Technology (NIST) conducted independent assessments that confirmed these results. Its evaluation of 189 algorithms found racial bias against women of color. NIST also concluded that even the best facial recognition algorithms studied couldn't correctly identify a mask-wearing person nearly 50% of the time.

2. Racial discrimination in law enforcement

In a recent revelation, the United States federal government released a report confirming discrimination in its facial recognition algorithms. The systems usually worked effectively for the faces of middle-aged white males but poorly for people of color, the elderly, women, and children. These racially biased, error-prone algorithms can wreak havoc, including wrongful arrests, lengthy incarcerations, and even deadly police violence.

35%

of facial recognition errors happen when identifying women of color, compared to 1% for white males.

Source: G2

Law enforcement agencies like the United States Capitol Police rely on mugshot databases to identify individuals with facial recognition algorithms. This creates a feedback loop, in which racist policing strategies lead to disproportionate arrests of innocent people, which in turn skew the very databases the algorithms draw on.

Overall, facial recognition data is imperfect and can result in penalties for crimes never committed. A slight change in camera angle or appearance, such as a new hairstyle, can cause errors.

3. Data privacy

Privacy is one of the general public's chief concerns, mainly due to a lack of transparency in how facial information is stored and managed. Facial recognition infringes on citizens' inherent right not to be under constant government surveillance and not to have their images kept without consent.

In 2020, the European Commission considered banning facial recognition technology in public spaces for up to five years to allow time to update its legal framework with guidelines on privacy and ethical use.

Privacy concerns around facial recognition also relate to unsecured data storage practices that could expose facial recognition data to security threats. Most organizations continue to host facial data on local servers, which creates security vulnerabilities, especially when they lack the IT security professionals needed to ensure network security.

Hosting facial recognition data in the cloud can improve data security, but data integrity can only be guaranteed through proper encryption. Deploying IT cybersecurity personnel is essential for proper data storage, along with giving consumers control over their data to improve accountability and prevent malicious traffic.

On the brighter side, consumer products equipped with facial recognition are less controversial, since users can disable or simply not use the feature. Consumer goods companies still face bans over privacy erosion, yet they continue to offer facial tech-laden products by marketing the technology as an advanced security feature.

Victims can also take the legal route and seek financial compensation for privacy violations. For example, social media giant Facebook settled a $650 million class-action lawsuit in Illinois over collecting users' facial data from photos without consent.

However, privacy remains an issue when law enforcement agencies use facial recognition technology to monitor, scan, and track citizens without their knowledge in the name of public safety and security. This has sparked numerous protests calling for stricter regulations that give citizens more control over participation and more transparency around storage and governance.

4. Lack of informed consent and transparency

Privacy is an issue with any form of data mining, especially online, where most collected information is anonymized. Facial recognition algorithms perform better when trained and tested on large datasets of images, ideally captured multiple times under different lighting conditions and from different angles.

The biggest sources of such images are online sites, especially public Flickr photos uploaded under copyright licenses that allow liberal reuse, and sometimes images scraped illegitimately from social media platforms.

Scientists at Washington-based Microsoft Research amassed the world's largest such dataset, MS-Celeb-1M, containing nearly 10 million images of 100,000 people, including musicians, journalists, and academics, scraped from the internet.

In 2019, Berlin-based artist Adam Harvey flagged these and other datasets on his website MegaPixels. Together with technologist and programmer Jules LaPlace, he showed that while most uploaders had shared their photos openly, the images were being misused to evaluate and improve commercial surveillance products.

5. Mass surveillance

When combined with ubiquitous cameras and data analytics, facial recognition enables mass surveillance that can compromise citizens' liberty and privacy rights. While the technology helps governments track down criminals, it also compromises the fundamental privacy rights of ordinary, innocent people.

Recently, the European Commission received an open letter from 51 organizations calling for a blanket ban on all facial recognition tools used for mass surveillance. Separately, more than 43,000 European citizens signed a Reclaim Your Face petition calling for a ban on biometric mass surveillance practices in the EU.

The recent spate of events has challenged the ethics of facial recognition technology, driven by the unchecked use of artificial intelligence (AI) to manipulate and threaten people, government agencies, and democracy at large.

AI and machine learning (ML) are the disruptive technologies behind facial recognition. It's important to draw red lines before they're misused for identity theft and fraud.

6. Data breaches and ineffective legal support

Data breaches can raise serious privacy concerns for both the public and the government. 

While security breaches are a major concern for citizens, the development of this technology has also driven advances in cybersecurity and increased use of cloud-based storage. With added layers of security such as encryption, data stored in the cloud can be protected from malicious use.

At the annual Black Hat hacker conference in Las Vegas, security researchers broke Apple's iPhone Face ID user authentication in just 120 seconds.

Such events expose stored biometric data to hackers and increase the likelihood of facial identity theft being used in serious crimes. Victims of face data theft also have relatively few legal options to pursue.

The EU General Data Protection Regulation (GDPR) gives researchers no legal basis to collect photos of people's faces for biometric research without consent. In the United States, laws on using an individual's biometric information without consent vary from state to state.

How to use facial recognition tools ethically

Facial recognition users can adopt the following principles proposed by the American Civil Liberties Union (ACLU) to ensure ethical use of this technology:

  • Collection: Institutions should obtain informed, written consent from citizens before including their biometric data in the facial recognition database.
  • Usage: Users should refrain from using facial recognition systems to determine an individual's skin color, race, religion, national origin, gender, age, or disability.
  • Disclosure: The results of a facial recognition system shouldn’t be traded or shared without the informed, written consent of the data subject.
  • Access: Citizens should have the right to access, edit, and delete their facial information, along with records of any changes made to the data.
  • Misuse: Organizations that host publicly available records related to an individual's identity should take proactive measures and appropriate controls to prevent their misuse for building a faceprint database. Such measures include restricting automated access to sensitive databases and contractually requiring partners to adhere to ethical usage guidelines.
  • Security: Organizations should have dedicated security professionals to host, manage, and secure facial recognition information.
  • Accountability: End-users must maintain an audit trail that includes information collection, use, and disclosure details along with the date and time stamps and details of users requesting the information.
  • Government access: Organizations may grant the government access to confidential information under the Privacy Act of 1974 or upon receipt of a probable cause warrant.
  • Transparency: Organizations must define policies for compliance and use of data while offering the necessary technical measures to verify accountability.
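
The Accountability principle above lends itself to a concrete sketch. The snippet below is a minimal illustration, not an ACLU-endorsed implementation: it records who requested facial data, what was done with it, and when, the kind of audit trail the principle calls for. The function and field names (`log_access`, `requested_by`, and so on) are hypothetical.

```python
import time

def log_access(audit_trail, action, requested_by, details):
    """Append one audit record: what was done, by whom, and when.

    `action` might be "collect", "use", or "disclose", mirroring the
    details the Accountability principle says an audit trail should hold.
    """
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "action": action,
        "requested_by": requested_by,
        "details": details,
    }
    audit_trail.append(entry)
    return entry

# Hypothetical usage: a consented enrollment followed by a lookup request
trail = []
log_access(trail, "collect", "registrar_01", "enrollment with written consent")
log_access(trail, "use", "analyst_07", "1-to-1 verification at building entry")
```

In practice, such a trail would live in append-only storage so records of collection, use, and disclosure can't be silently edited after the fact.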

Examples of ethical use of facial recognition technology

Facial recognition technology is at the heart of many tech companies' products that focus on customer safety while protecting their systems from potential security threats. Let's examine three examples of companies using facial recognition ethically.

IBM

Tech giant IBM imposed sweeping restrictions on sales of its facial recognition technology and called for federal regulation in the United States. In addition, IBM proposed specific recommendations to the US Department of Commerce to tighten restrictions on exporting facial recognition systems in certain instances.

IBM also pushed for precision regulation, which imposes stricter restrictions on the end uses and users most likely to cause significant societal harm. It proposed six changes to how facial recognition technologies are controlled, including:

  • Restricting "1-to-many" matching end uses such as mass surveillance, racial profiling, and other applications that could violate human rights
  • Limiting the export of "1-to-many" systems by controlling the export of both the high-resolution cameras and the algorithms used to collect and analyze data against a database
  • Imposing restrictions on certain foreign governments' procurement of large-scale cloud computing components for integrated facial recognition systems
  • Restricting access to online image databases that can be used to train "1-to-many" face recognition systems
  • Having the Department of Commerce maintain up-to-date human rights records on crime-fighting groups and applying the strictest export controls to facial recognition technologies that support "1-to-many" matching
  • Finally, limiting the ability of repressive regimes to procure controlled technologies beyond US borders through mechanisms such as the Wassenaar Arrangement
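
IBM's recommendations hinge on the distinction between "1-to-1" verification (confirming a claimed identity against one enrolled template) and "1-to-many" identification (searching an entire database for the closest match), the mode that enables mass surveillance. The sketch below illustrates that distinction with toy two-dimensional embeddings and cosine similarity; real systems use high-dimensional embeddings produced by a neural network, and every name, value, and threshold here is a made-up assumption.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face embeddings (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(probe, enrolled, threshold=0.8):
    """1-to-1 matching: does the probe match one claimed identity?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe, database, threshold=0.8):
    """1-to-many matching: search every enrolled identity for the best match."""
    scores = {name: cosine_similarity(probe, emb) for name, emb in database.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

# Toy database of pre-computed embeddings (entirely fabricated)
database = {"alice": [1.0, 0.0], "bob": [0.0, 1.0]}
probe = [0.9, 0.1]  # an embedding that happens to sit close to "alice"
```

The privacy stakes differ sharply between the two modes: `verify` compares against a single consented template, while `identify` scans everyone enrolled, which is why IBM singles out "1-to-many" systems for export controls.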

Microsoft

Microsoft has established several principles to address the ethical issues of facial recognition systems. It has released training resources and new materials to help its customers become more aware of the ethical use of this technology. 

In addition to working closely with its customers, Microsoft is working to improve the technology's ability to recognize faces across a wide range of ages and skin tones. NIST recently evaluated Microsoft's facial recognition technologies and rated its algorithms as the most accurate or nearly the most accurate in 127 tests.

Microsoft is also pushing for new laws to address transparency and third-party testing and comparison. To encourage transparency, it proposes that tech companies publish documentation that delineates the capabilities and limitations of their facial recognition services.

It has also highlighted the need for legislation requiring independent third-party testing of commercial facial recognition services, with published results, to address issues of bias and discrimination.

Amazon

In 2020, Amazon imposed a one-year moratorium on law enforcement's use of its facial recognition technology, Amazon Rekognition. Amazon also maintains that in public safety and law enforcement scenarios, the technology should be used only to narrow down potential matches.

Amazon has also applied for a patent to research additional authentication layers to ensure maximum security. Some of these include asking users to perform actions such as smiling, blinking, or tilting their heads.

Is facial recognition invasive?

The main problems and failures of facial recognition technology stem from immature algorithms, a lack of diversity in training datasets, and inefficient system handling. However, adopting certain ethical principles can keep it from becoming invasive.

Eliminating partiality in facial recognition means preventing or minimizing bias: fixing glitches in law enforcement applications, providing transparency into how the artificial intelligence works internally, enforcing stakeholder accountability, monitoring only with consent and prior notice, and enacting stricter legislation against human rights violations.

Facial recognition technology holds enormous potential for real-world applications. However, addressing its ethical concerns is vital to making it a boon to humanity.


