
Tuesday, December 31, 2024

The Year in AI, Wrapped

By Michelle Hof, AIB

As 2024 draws to a close, a lot of “best of” lists are being shared: top 100 books of the year, 10 best movies to watch with family over the holidays, top 5 artists on your Spotify feed, and so on. 

In that spirit, here are some of the lessons I have learned this year, as a member of the Science Hub AI workstream, about artificial intelligence as it applies to our community of professionals. These are largely inspired by what I saw at the event “The Impact of AI on Interpreting” organized by AIIC Switzerland in Geneva earlier this autumn. Here we go:

1) AI is drawing our attention like never before – while it may have felt like a niche topic only a few years ago, AI is now on all interpreters’ minds. The AIIC Switzerland event showed this, with over 100 attendees making the trip to Geneva, some from outside Europe. This interest is also reflected in the number of followers that the AI workstream’s LinkedIn page has gained in the few months since it was launched (516 at last count).

2) The AI landscape is constantly shifting – Developments in AI are being rolled out almost daily, it seems, with groundbreaking new capabilities announced in quick succession. AIIC Switzerland’s event coincided with the release of OpenAI’s o1 large language model, but since then even more eye-popping AI-powered tools have been unveiled, like OpenAI’s recent announcement of its new Sora video generator, or Google’s agent-focused Gemini 2.0, with its “everything app” prototype Project Astra. With so much going on, it can feel hard to stay on top of things.

3) Interpreters’ views on AI are increasingly nuanced – The good news is that, as a community, we interpreters seem to have moved on from what I’ll call “freak out” mode into a more productive space, where we are asking informed questions and examining how we can engage with technological developments. This was clearly seen at the AIIC Switzerland event, where both speakers and attendees showed a more sophisticated understanding of AI and were not afraid to tackle the tough questions about what AI means for us as a profession and what we need to know going forward.

4) We can benefit from existing expertise on AI – Speaking of what we need to know about AI, in Geneva we heard from a broad range of speakers, each offering a different answer to that question. It was clear that AIIC Switzerland wanted not only to hear from interpreting colleagues in the know, but also to engage with tech experts from outside our community. Panelists included experts in everything from computer-assisted interpreting tools and cognitive load to digital law, media design and ethics. 

5) More interpreters are using AI-powered tools than you might think – In one talk, Paris-based AIIC member and interpreter trainer Andrew Gillies showed us who is already embracing AI-powered tools. Over a series of slides, Gillies shared the results of a snap survey he sent out to colleagues (the full write-up with all the figures can be found here). While Gillies’ convenience sample of 400 or so respondents, mainly based in Europe, may not be fully representative of our global industry, the numbers still paint a vivid picture: as many as 80% of respondents working on the private market reported using some form of AI in their work, whether for translation (80%), glossary creation (45%), term extraction (35%), document preparation (30%) or transcription (20%). The numbers are somewhat lower for colleagues in the institutional market, where just over half reported using these tools in their work. Gillies pointed out that use varied widely depending on the type of tool in question (e.g. DeepL use was reported by 80% of respondents, automated boothmates by only 8%). But I think most of us in Geneva were amazed to see just how many interpreters are already incorporating these tools into their work in some way.

6) A host of AI-powered tools are already available for interpreters to use – In Geneva we also heard from Prof. Dr. Anja Rütten of the TH Köln, a self-professed techie and a fellow member of AIIC’s AI workstream, about a number of the tools that we can tap into to improve our workflow. On one of her slides, she listed over two dozen different AI tools for speech recognition, live prompting, multi-source searching, abstracting and more. Rütten runs a blog where she reviews and explains a lot of these applications, so I will spare you the details here, but her presentation left us all with the clear impression that whatever needs doing, there seems to be an AI tool trying to help you do it.

7) Some of the interpreters using AI do not consider confidentiality – Among interpreters who do not use AI tools, one barrier may well be concerns about confidentiality (as Gillies showed us, many institutions have rules in place limiting AI use for this reason). Among those who do use AI, not all take steps to ensure confidentiality for their clients’ material. Some colleagues reported addressing confidentiality by using only materials in the public domain (60%), while others said they did so by taking out paid subscriptions (50%), asking their clients for permission to use AI tools (15%), and/or using tools with ISO security certificates (15%). However, about 10% of AI users reported that they did not take any steps at all to ensure confidentiality. In a profession such as ours, that figure should be zero.

8) It is not only confidentiality that we need to be thinking of – As we learned from invited guest Yaniv Benhamou, Professor of Digital Law, Copyright and Information Law at the University of Geneva, there are at least three different interfaces where AI and intellectual property (IP) meet in our work, and we need to understand the nuances of each. In addition to the issues raised when confidential documents are fed into AI-powered tools, there is the threat of copyrighted works being used to train AI models – which we might face, for instance, if interpreters’ renditions and voices were scraped from the internet to train AI-powered speech translation tools. There is also the question of privacy violations when user data is fed into AI models. These three concerns – confidentiality, copyright and data protection – were addressed at the very interesting PriMS event on IP rights held on 28 September. AIIC would do well to become better informed about them, and the AI workstream is working with Prof. Benhamou’s team to gain a better understanding of the legal landscape surrounding AI and how it affects us.

9) We all want to learn more about AI – Back to those AI-powered tools, then. Rütten rightly stated that the next step has to be about fostering AI literacy. Her school, the TH Köln, is currently building a CAI tool campus intended to do just that. As an interpreter trainer myself, I agree wholeheartedly that interpreting schools urgently need to incorporate tech training, and specifically AI-related topics, into their curricula. But it’s not just students we need to be thinking about – practitioners are also calling for more training to help them understand what is at stake. I predict that 2025 will be the year that we all learn something new about AI and the role it can play in our work.

10) Interpreters must lead the debate around AI use in our profession – Rounding off this top 10 is my conviction that interpreters cannot afford to just stand by and observe developments. This proactive stance is what governs the AI workstream’s approach: our aim is to ensure that AIIC members have what they need to understand the world of AI, to guide our clients in decisions about its use, and to leverage its benefits for our own work. Equally importantly, we aim to ensure that AIIC’s voice is heard in the forums where decisions about technology are taken. 

So that is my year in AI, wrapped! I imagine that 2025 will bring us even more exciting news about artificial intelligence – and AIIC’s AI workstream will be there to help you make sense of it.
