A long time ago, I wrote
“I’m curious about how augmenting human ability and innovating technology for communication can help us, collectively, become ‘educated’.”
The Future of Communication Part 1: The Session
Nowadays, I look at the world and think “Oops.” I remind myself that being alive has always been sort of complicated.
I recently watched a video from the Center for Humane Technology about the advancement of AI. Then I shared it with my network. It freaked me out, but made me feel like we, the technologists, have a responsibility to be clear-eyed. It’s not like this is a new impulse for me, the desire to ensure that our humanity doesn’t get lost. Indeed, I’ve been hating on corporate tech for a good long while. And with good reason.
Doug said that he found the Center for Humane Technology’s video a bit “scaremongery”. Indeed, I know some people who have been freaked out for the last few weeks. Meanwhile, I remain in high spirits about the whole AI thing.
Despite knowing the answer…
A scaremonger is someone who spreads alarming or exaggerated information or rumors, especially in order to frighten people into taking a particular action or to influence their opinions or behavior. Scaremongers often use fear tactics to manipulate people’s emotions and opinions, and they may exploit people’s anxieties or insecurities to achieve their goals. Scaremongering can be seen in a variety of contexts, including politics, advertising, and media, and it can be used to create a sense of urgency or to promote a particular agenda.
ChatGPT defines scaremonger
Is the Center for Humane Technology spreading alarm? Yes, yes they are. But what about the “exaggerated information or rumors”? There I’m not so sure.
In any case, the opposite of scaremongering is something like “calm” or “reassurance”. I thought I would pen a boring, calming post about three AI regulatory models. AI didn’t happen overnight. Calm, ethical, techie nerds are advising and advocating and chatting and talking about this stuff. It’s all going to be ok.
We don’t need no stinking rules
The first AI regulatory model is the light-touch model you would expect from the corporate fuckaroos. It’s a minimalist approach to “spur innovation” that focuses on voluntary guidelines and principles rather than enforceable laws. This approach is currently used by the UK and USA, surprise. Under this model, companies and developers are “encouraged” to follow best practices and ethical standards for AI development. There are few mandatory rules or regulations. What does this mean? Basically, the public is left to fend for themselves and figure out if something is going to cost them, I don’t know, their life savings, left kidney or their sanity.
Both the UK and the USA have developed guidelines and principles that encourage developers to design ethical AI systems and that emphasize industry standards. The UK’s Centre for Data Ethics and Innovation (CDEI) has made recommendations on AI bias and governance, but it can’t enforce this stuff.
You know how smoking is no longer allowed in restaurants or on airplanes, and how cars are built with annoying alarms that remind us to buckle our seat belts? Yeah, that’s public health and safety regulation. The science shows fewer people dead, neat eh?
Let’s figure out how to bring some balance into this
The second model for regulating AI tries to be more balanced and comprehensive. It combines voluntary guidelines with enforceable laws. The EU has been doing this in the tech space for a while. The best-known example is the EU’s General Data Protection Regulation (GDPR), which includes some provisions for AI systems that process personal data. The GDPR requires companies to obtain explicit consent from individuals before using their personal data for AI purposes. They are also required to provide transparency about how AI algorithms make decisions.
While not perfect, regulations like GDPR try to strike a balance between protecting individual privacy rights and allowing companies to collect and use personal data for stuff like marketing or research. The GDPR has been hailed as a model for data protection regulation worldwide, and has inspired similar legislation in other countries.
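To make that consent requirement concrete, here’s a minimal sketch of what gating training data on explicit, per-purpose consent might look like. Everything in it (ConsentStore, PURPOSE_AI_TRAINING, the record shape) is hypothetical, my own illustration rather than the GDPR’s wording or any real compliance library.

```python
from dataclasses import dataclass, field

# Hypothetical purpose label; consent under GDPR is purpose-specific.
PURPOSE_AI_TRAINING = "ai_training"

@dataclass
class ConsentStore:
    # Maps user_id -> the set of purposes that user explicitly opted into.
    _grants: dict = field(default_factory=dict)

    def grant(self, user_id: str, purpose: str) -> None:
        self._grants.setdefault(user_id, set()).add(purpose)

    def has_consent(self, user_id: str, purpose: str) -> bool:
        return purpose in self._grants.get(user_id, set())

def filter_training_records(records: list, consents: ConsentStore) -> list:
    """Keep only records whose owners explicitly opted in to AI training."""
    return [r for r in records
            if consents.has_consent(r["user_id"], PURPOSE_AI_TRAINING)]

store = ConsentStore()
store.grant("alice", PURPOSE_AI_TRAINING)

records = [
    {"user_id": "alice", "text": "..."},
    {"user_id": "bob", "text": "..."},
]
# Only alice opted in, so only her record is eligible for training.
print(filter_training_records(records, store))
```

The point of the sketch is simply that consent gets recorded per purpose, and personal data flows into an AI system only when that specific purpose was opted into; being transparent about the decisions the system then makes is a separate obligation.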
We’ll tell you what to do
The third AI regulatory model is what we’re seeing in China at the moment. It’s an approach that imposes wide-reaching, strict rules and requirements on AI development and use. China has several government agencies playing central roles in regulating AI, and if you read into it, you’ll learn about how people are “jostling” to look smart in front of the powers that be. Le sigh.
At the moment, Chinese companies need to obtain government approval before deploying AI systems, and “Providers will be responsible for the legitimacy of data used to train generative AI…”. Another example of heavy-handed AI regulation? Well, Russia, of course. The obvious criticism is that countries with heavy-handed regulatory policies will use AI for political purposes, surveillance and censorship.
Are you calm?
Policy calms me down, mostly because it comes as very long, boring papers written in elitist language that make me sleepy. Still, I’m glad other people find it interesting and exciting.
It’s easy to look at AI and be like “OMFG WE’RE ALL GONNA DIE!”. I admit, AI feels different, but remember NFTs and cryptocurrency and blockchain blockchain blockchain! Tech hypes always promise to change the fabric of society. Tech bros always jump on the latest thing and make a big deal about how this tech is going to change everything. CEOs always demand their entire staff start thinking about how to work said tech hype into current products.
Yet here we are.
So although the last couple of months of AI innovation feel fast and scary, remember – it didn’t happen overnight. There are loads of people quietly continuing to do the work that surrounds technology: the policy work, the ethics work, the clear-eyed, responsible work of helping society deal with technological progression, whatever that progression might look like.
- No ChatGPT was used in writing this post… perhaps it should have been.