
Women in AI: Rachel Coldicutt researches how technology impacts society


To give women academics and others focused on AI their well-deserved (and overdue) time in the spotlight, TechCrunch has published a series of interviews with notable women who have contributed to the AI revolution. We'll be publishing these pieces throughout the year as the AI boom continues, highlighting important work that often goes unrecognized. Read more profiles here.

Featured today: Rachel Coldicutt is the founder of Careful Industries, which researches the social impact of technology. Clients include Salesforce and the Royal Academy of Engineering. Before Careful Industries, Coldicutt was CEO of the think tank Doteveryone, which also researched how technology affects society.

Before Doteveryone, she spent decades working on digital strategy for organizations including the BBC and the Royal Opera House. She attended Cambridge University and received an OBE (Officer of the Order of the British Empire) for her work in digital technology.

Briefly, how did you get started in AI? What drew you to the area?

I started working in technology in the mid-'90s. My first proper tech job was on Microsoft Encarta in 1997, and before that I helped build content databases for reference books and dictionaries. Over the past three decades, I've worked with all kinds of new and emerging technologies, so it's difficult to pinpoint the exact moment I "got into AI," because I've been using automated processes and data to guide decisions, create experiences, and produce works of art since the 2000s. Instead, I think the question is probably, "When did AI become the set of technologies everyone wanted to talk about?" and I think the answer is probably around 2014, when DeepMind was acquired by Google: that was the moment in the UK when AI overtook everything else, even though many of the underlying technologies we now call "AI" were things that were already in quite common use.

I started working in technology almost by accident in the 1990s, and what has kept me in the field through many changes is that it is full of fascinating contradictions: I love how empowering it can be to learn new skills and make things, I'm fascinated by what we can discover from structured data, and I could happily spend the rest of my life observing and understanding how people create and shape the technologies we use.

What work are you most proud of in the field of AI?

Much of my work in AI has been in policymaking and social impact assessments, working with government departments, charities and all types of businesses to help them use AI and related technology in an intentional and trustworthy way.

In the 2010s, I ran Doteveryone, a responsible technology think tank that helped change the way UK policymakers think about emerging technology. Our work made clear that AI is not an inconsequential set of technologies, but something with pervasive real-world implications for people and societies. In particular, I'm very proud of the free Consequence Scanning tool we developed, which is now used by teams and businesses around the world, helping them anticipate the social, environmental and political impacts of the choices they make when they ship new products and features.

Most recently, the 2023 AI and Society Forum was another proud moment. In the run-up to the industry-dominated UK government AI Safety Summit, my team at Careful Trouble quickly convened and organized a gathering of 150 people from across civil society to collectively make the case that it's possible to make AI work for 8 billion people, not just 8 billionaires.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

As a veteran of the tech world, I feel some of the gains we'd made in gender representation have been lost over the last five years. A Turing Institute study shows that less than 1% of investment in the AI sector has gone to female-led startups, while women still make up only a quarter of the global technology workforce. When I go to AI conferences and events, the gender mix, particularly in terms of who gets a platform to share their work, reminds me of the early 2000s, which I find very sad and shocking.

I am able to navigate the sexist attitudes of the tech industry because I have the enormous privilege of being able to found and run my own organization: I spent much of my early career experiencing sexism and sexual harassment on a daily basis, and dealing with that gets in the way of doing great work and is an unnecessary cost of entry for many women. Instead, I've prioritized creating a feminist business where we collectively strive for equity in everything we do, and my hope is to show that other ways are possible.

What advice would you give to women seeking to enter the AI field?

Don't feel like you have to work on "women's issues," don't get swept up in the hype, and seek out peers and build friendships with others so you have an active support network. What has kept me going all these years is my network of friends, former colleagues and allies: we offer mutual support, a never-ending supply of pep talks, and sometimes a shoulder to cry on. Without that, it can feel very lonely; you'll often be the only woman in the room, so it's vital to have a safe place to decompress.

As soon as you get the opportunity, hire well. Don’t replicate structures you’ve seen or entrench the expectations and norms of an elitist, sexist industry. Challenge the status quo every time you hire and support your new hires. That way, you can start building a new normal, wherever you are.

And seek out the work of some of the great women pioneering AI research and practice: start by reading the work of pioneers like Abeba Birhane, Timnit Gebru, and Joy Buolamwini, who have all produced foundational research that has shaped our understanding of how AI changes and interacts with society.

What are some of the most pressing questions facing AI as it evolves?

AI is an amplifier. It may seem like some of its uses are inevitable, but as societies we need to be empowered to make clear choices about what is worth scaling up. Right now, the main effect of increased AI use is to grow the power and bank balances of a relatively small number of male CEOs, and it seems unlikely that it is shaping a world many people want to live in. I would love to see more people, particularly in industry and policymaking, engage with the questions of what more democratic and accountable AI would look like and whether it is even possible.

The climate impacts of AI (the use of water, energy and critical minerals) and the health and social-justice impacts on people and communities affected by the exploitation of natural resources need to be at the top of the list for responsible development. The fact that LLMs in particular are so energy-intensive shows that the current model is not fit for purpose; in 2024, we need innovation that protects and restores the natural world, and extractive models and ways of working need to be phased out.

We also need to be realistic about the surveillance impacts of a more datafied society and the fact that, in an increasingly volatile world, any general-purpose technologies are likely to be used for unimaginable horrors in warfare. Everyone working in AI needs to be realistic about technology R&D's long-standing historical association with military development; we need to champion, support and demand innovation that starts in and is governed by communities, so we can achieve outcomes that strengthen societies rather than contribute to escalating destruction.

What are some issues AI users should be aware of?

Alongside the environmental and economic extraction built into many of today's AI business and technology models, it's really important to think about the day-to-day impacts of increased AI use and what that means for everyday human interactions.

While some of the issues that make headlines concern more existential risks, it's worth keeping an eye on how the technologies you use help and hinder you day to day: which automations can you switch off and work around, which ones deliver real benefit, and where can you vote with your feet as a consumer to make the case that you really do want to keep talking to a real person rather than a bot? We don't have to settle for poor-quality automation, and we should come together to demand better outcomes!

What’s the best way to build AI responsibly?

Responsible AI starts with good strategic choices: instead of just throwing an algorithm at a problem and hoping for the best, it's possible to be intentional about what to automate and how. I've been talking about the idea of "Just Enough Internet" for a few years now, and it feels like a really useful concept to guide how we think about building any new technology. Instead of pushing the boundaries all the time, can we instead build AI in a way that maximizes benefits for people and the planet while minimizing harm?

We've developed a robust process for this at Careful Trouble, where we work with boards and senior teams, starting by mapping how AI can and can't support their vision and values; understanding where problems are too complex and variable to benefit from automation, and where it will create benefits; and, lastly, developing an active risk-management framework. Responsible development is not a one-off application of a set of principles, but an ongoing process of monitoring and mitigation. Continuous deployment and social adaptation mean that quality assurance can't be something that ends when a product ships; as AI developers, we need to build the capacity for iterative social sensing and treat responsible development and deployment as a living process.

How can investors better promote responsible AI?

Making more patient investments, supporting more diverse founders and teams, and not seeking exponential returns.


