Keeping AI Ethical, Accessible and Responsible
by Nikki Stefanoff
In an era where technology permeates every aspect of our lives, the need for artificial intelligence (AI) to be designed, built and used in an ethical manner becomes more apparent.
With half the world heading to elections this year, and the rise of deepfake videos and images blurring the line between good and bad actors, AI now has the potential to shake the global foundations of democracy. Humanity is finding itself in quite the technological pickle.
How do we as a collective forge ahead into this brave new world without losing our grip on what’s real and what isn’t? How do we keep AI ethical, responsible and built with everyone’s health, safety and lived experience in mind?
What do we mean when we say ethical AI?
The current focus on ethical AI is mostly around generative AI like ChatGPT — large language models (LLMs) that can scan mind-blowing amounts of data in record time to give you the answers you need.
Over the last couple of years news headlines have screamed about how AI is here to take our jobs and help our kids cheat at school. And while there’s an element of truth in there — SAG-AFTRA members weren’t striking over nothing — AI also has the potential to make our lives much more efficient. And we’ve got to admit, it’s not going anywhere anytime soon.
A recent article in the Harvard Business Review reported that 67% of senior IT leaders were prioritising generative AI for their business, with one-third (33%) naming it as a top priority.
While they’re right to make it a priority, education and guidelines need to be built into how they use it. If not deployed mindfully, generative AI can do more harm than good to an organisation.
For example, if AI-generated content is inaccurate, inaccessible or has the potential to cause harm to users or customers, organisations could find themselves wading through a quagmire of legal and financial woes.
It’s this need for a framework and guidelines around using AI ethically that prompted the United Nations Educational, Scientific and Cultural Organisation (UNESCO) to release its Recommendation on the Ethics of Artificial Intelligence.
The 2021 report approaches AI ethics by using human dignity, well-being and the prevention of harm as its compass, and offers guidance on how to put these principles into practice. UNESCO’s recommendation has since been adopted by all 193 member states.
Ethical and responsible AI should be designed with the human at the centre
With so much focus on AI in an organisational setting, Deque, a US-based digital accessibility organisation, recently ran its second annual conference with a line-up of speakers experienced in ethical and responsible AI.
Dr Rumman Chowdhury was a keynote speaker and gave a presentation grounded in personal experience. Chowdhury once led the Responsible AI practice at consulting firm Accenture and was the Director of Machine Learning Ethics, Transparency and Accountability at X (formerly Twitter). She now runs Humane Intelligence, an organisation focused on safety and ethics when building generative AI products.
“Whenever I see headlines [about AI saving us] I always think of [data journalist and author] Meredith Broussard who coined the word ‘technochauvinism’, which is the belief that technology is always the solution,” Chowdhury said in her opening gambit.
“It’s the idea that human beings are flawed and technology will save us, that there’s some sort of ideal human that we should be like and act like. The normal functions of human beings that we all share — sleeping, eating — are considered to be inefficient [making] robots better than us.”
According to Chowdhury, the problem with thinking AI will solve all our problems is that AI models often rely on data that hasn’t taken into account human diversity and individual needs. These models tend to be trained on the wants and needs of the ‘average human’, an idealised person who, quite frankly, doesn’t exist.
This ends up perpetuating bias in AI and inaccuracies in information that disproportionately affect marginalised communities, including those living with disability.
It’s why, when designing and deploying ethical AI systems, inclusivity and understanding the diverse needs of all individuals should be prioritised. Much like human-centred design, human-centred AI is needed to ensure no one’s needs get left behind.
Which comes first, the human or the machine?
Chowdhury has coined the phrase ‘retrofit human’ to describe how those building AI models often try to adjust human beings to the limitations of an AI system, rather than adjusting the technology to better serve humanity.
The human plays a large role in Chowdhury’s view of responsible and ethical AI, and she’s not alone in her thinking. One of the fundamental principles of responsible technology is recognising the role of humans in AI systems, and making sure there is always a human in the loop.
The concept of having a ‘human in the loop’ is often illustrated with the story of Stanislav Petrov. In 1983, Petrov’s decision to question an alert from a nuclear early warning system prevented a potential nuclear war, highlighting the importance of integrating human oversight into technological systems.
How can organisations make sure they’re using ethical AI?
For those asking what the future of ethical AI looks like, accountability, inclusivity and transparency are all emerging as cornerstone principles.
Google’s seven-point ethical AI framework encapsulates these principles and offers guidance on making sure AI models continue to help all of society by staying fair and ethical and leaving no one’s experience behind in the process.