
As Generative AI becomes pervasive in modern business, we’re seeing the emergence of AI codes of conduct and ethics on major corporate websites. In most cases, these commitments refer to how the organization will use AI tools, and how user data will be protected in the process.

Ethical AI adoption is critical for sustainable business, and not just because it’s the right thing to do. Consumers increasingly hold organizations accountable for their behavior, and backlash is swift when organizations act unethically. But a major omission in many of these commitments is how humans will be impacted by AI.

Human impact should be one of the most important considerations: we already see AI uptake resulting in job losses and in potential human rights abuses such as AI ‘sweatshops’, where the workers who train AI models and moderate AI content are exploited and underpaid.

There are also warning signs that overreliance on AI may be impacting human cognitive abilities: for example, an MIT study found that relying too much on Generative AI could negatively affect learning. 

Despite the impressive capabilities of AI, humans have a crucial role to play in the AI era. You still need humans to edit, vet, and fact-check AI outputs; humans available to handle escalations AI can’t manage; and humans in place to oversee and take responsibility for AI actions.

Countering AI job erosion

As AI gains traction, organizations must acknowledge the impact AI has on jobs.

Overlooking the importance of humans in the workplace could erode job quality, displace workers, and deepen inequality. 

We believe AI adoption should reinforce, not weaken, job security, skills development, and trust across organizations. By applying a ‘humans in the loop’ principle, organizations should champion the augmentation of humans with AI, as opposed to the replacement of humans with AI. Keeping humans in the loop gives organizations the benefit of sustainable, human-centric connections with their employees, stakeholders, and customers, and mitigates the risk of losing customers due to unethical AI adoption. 

Given the pace at which AI is reshaping the current and future workforce, organizations must ensure their people are upskilled to manage, teach, facilitate, and leverage AI to augment their existing workflows, ensuring in the process that they are not left behind.

Organizations need to focus on developing career resilience: the thoughtful, forward-looking assessment and mitigation of AI’s impact on specific jobs, particularly entry-level roles, which are most at risk. Investing in upskilling these individuals is crucial to reducing the risk of displacement.

The skill sets to develop include building staff members’ confidence with a variety of generative AI tools and, importantly, coaching them to discern whether the output of those tools is accurate. One foundational skill is understanding how generative AI works and which tools are available; an important second layer is making sure humans don’t outsource their thinking to AI. When AI is introduced, there is a risk of over-dependence on its outputs: for example, copying and pasting AI analysis without incorporating the team’s historical experience to make a stronger, more informed decision.

In addition to keeping humans in the loop, any efficiency gains from AI-driven growth should be shared, whether through career progression for team members or increased wages for higher-value roles.

Beyond jobs: other ethical considerations

Ethical, accountable conduct doesn’t just apply to how staff within the organization are impacted by AI, but also to humans throughout the AI value chain. To us, accountability means recognizing that the impact of AI must always uphold the welfare and dignity of the team members leveraging this technology. 

Inclusion is also a crucial consideration. This means ensuring AI is locally relevant and unbiased. For example, some BPOs and outsourcing companies use accent-neutralizing AI tools to reduce training time and make it easier for team members to communicate with callers. We have not needed such tools, nor have our clients requested them. As a global company, our team members speak with a variety of accents, and we believe accent-correcting software raises ethical questions and cultural nuances worth discussing.

As AI becomes ubiquitous, it is vital that all organizations and AI users keep ethics and accountability top of mind.

Boldr is committed to ethical AI adoption and keeping humans in the loop. Humans are at the heart of our AI Manifesto, which focuses on seven key principles: accountability, transparency, inclusion, enablement, resilience, redistribution, and protection. Read Boldr’s AI Manifesto here.   


Glo Guevarra is the Global Head of Impact and Content at Boldr. She holds a postgraduate degree in Labor, Activism, and Development from SOAS University of London.