The enormous focus on AI at the Web Summit 2023 tech conference in Lisbon was predictable, but the scale of it was still a bit surprising. At a guess, I’d say over 60% of the talks had AI in the title, and 90% of the ones I saw mentioned it at least once.

Some of the uses for AI that were floated at the conference:

  • Automating routine reporting, like sports scores, so journalists have more time to focus on their stories
  • Sifting through vast datasets – climate data, for example – freeing up the sustainability workforce to focus on implementation
  • Measuring and predicting wildfire spread so firefighters can respond more effectively
  • Generating entire design files – not just images – complete with layers, editable text and adjustable colours

Exciting, right? However, I arrived bright-eyed and looking forward to hearing from speakers about how we might approach the puzzle of regulating something we don’t fully understand yet. So it was pretty disappointing that all the talks from industry leaders about regulating this technology essentially just confirmed that…we don’t fully understand AI yet, and we really should figure out some ways of regulating it.

(I did hear one half-hearted comment about how we needed to consider those who’ll be most affected by automation – women in low-paid jobs – before the speaker was hustled off the stage in time for the next talk, which, for someone attending the conference on a Women in Tech ticket, wasn’t exactly encouraging.)

And look, I get it. Nobody wants to think about the boring logistics of regulating actual robots. Robots are cool, and the tech sector is predictable. In our bubble of govtech and tech for good, it’s easy to forget that we’re actually a very small corner of the market, without a lot of money in it. At an event usually sponsored by the likes of Meta, of course people were more excited about the shortcuts AI can offer than the ones it should.

It’s always been this way, though – Silicon Valley likes to move fast and break things, and what springs out of that is a small but incredibly dedicated group of people committed to making the new world order as accessible and ethical as possible. It’s usually people working for non-profits, for or with governments, or on volunteer panels like those of the W3C. One organisation at the Web Summit, the International Rescue Committee, is developing software to help refugees access vital information. The stakes for any data that might be processed by an AI couldn’t be higher: if, for example, a smuggler gets hold of it, they could hunt a person down, figure out where their relatives are, or even enslave them. Signpost, an offshoot of the IRC, is testing an AI model pre-deployment before integrating it with this software, and will publish its findings.

So, here’s what I know: AI will be (is already) incredibly disruptive, and the ethics behind its use will be questionable at best in the majority of cases.

But the tech for good community will continue to do what it always does: diligently push for action on data sovereignty and accountability, ensure these new technologies are developed with accessibility in mind, and create versions of them that serve citizens without putting their privacy and wellbeing at risk. From Delib’s perspective, in the immediate term that means investigating third-party services and keeping up with market demands without compromising on the rigorous work we do to keep our products safe, secure and compliant.

Read our free guide to using AI in your work