Beyond the peak

Trinity research assistant and co-founder of CaliberAI, Neil Brady, navigates the generative AI landscape, recognising its current hype-cycle peak, affirming its lasting impact, and highlighting five things to be aware of as the dust settles in 2024.

As the world enters 2024, technology analysts everywhere are trying to anticipate the likely next instalment in the generative artificial intelligence story. In January, for example, Accenture CEO Julie Sweet mused that 'most companies do not have mature data capabilities and if you can't use your data, you can't use AI.'

There is much truth to this. But in the wake of The New York Times' foreseeable decision to file suit against OpenAI and Microsoft, claiming 'billions of dollars in statutory and actual damages' for the unlawful use of its data to train large language models (LLMs) such as ChatGPT, a more prescient choice of phrasing might have been 'others' data'. This question of data quality, ownership, legality and provenance will be a defining issue of artificial intelligence in 2024 and beyond, perhaps second only to the cost of compute.

At CaliberAI, we have been working with LLMs for almost five years, and as a team we have collective experience implementing disruptive technologies of many kinds, from the internet to social media. My co-founder and father, Conor, oversaw the creation of The Irish Times' original web domain (www.ireland.com), while CaliberAI's Chief Technology Officer, Paul Watson, oversaw the creation of Storyful, one of the world's first social media newswires, later acquired by News Corp.

Thanks in no small part to an Advisory Panel of eminent minds from the worlds of journalism, law and philosophy, we have built our technology in anticipation of the kinds of problems now facing OpenAI and others in relation to copyright and other data integrity imperatives. For example, while most policy and technology observers are rightly focused on the AI Act and its implications, several provisions of the General Data Protection Regulation already regulate AI to a great extent, including Recital 71 and Articles 13, 14, 15 and 22, all of which mandate the kind of algorithmic explainability we have baked into our products.

So, here are five things to understand about artificial intelligence going into 2024.

1. It's all about the data

Whether viewed from the perspective of a data scientist, machine learning engineer, copyright lawyer or writer, it is clear that data integrity and quality are what will differentiate AI services and products, and reduce their risk, going forward. Relatedly, in the wake of the AI Act's tiered approach to risk management, it is also clear that the use of safeguards and guardrails to mitigate reasonably foreseeable risk will be crucial (a minimal sketch of what such a guardrail might look like follows at the end of this piece).

2. The European Union (EU) will remain the world's preeminent data regulator, but regulation is splintering

The EU has led the way on data protection since 2016, and it will continue to do so through the AI Act. This was affirmed by OpenAI's recent decision to shift responsibility for data control to OpenAI Ireland Limited. At the same time, other companies, such as Facebook, increasingly suggest that it may not be possible to offer their products and services within the EU, supposedly due to the high regulatory bar. Meanwhile, as the EU agrees an outright ban on practices such as biometric categorisation systems, other jurisdictions, such as China, have done the opposite.
Splintered regulation is unavoidable, but the EU is as close as the world can get right now to supranational enforcement of standards.
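
To make the guardrail idea in point 1 concrete, below is a minimal, illustrative sketch in Python. Everything in it (the function names score_defamation_risk and guardrail, the threshold value, the crude keyword heuristic) is hypothetical and merely stands in for a trained classifier; it does not depict CaliberAI's actual implementation or API.

```python
# A minimal, illustrative pre-publication guardrail.
# All names and values here are hypothetical stand-ins,
# not CaliberAI's actual API.

RISK_THRESHOLD = 0.7  # hypothetical cut-off, tuned per use case


def score_defamation_risk(text: str) -> float:
    """Placeholder for a classifier returning a risk score in [0, 1].

    In practice this would call a trained model; here a crude
    keyword heuristic is used purely so the example runs end to end.
    """
    flagged_terms = {"fraudster", "liar", "criminal"}
    words = {w.strip(".,!?").lower() for w in text.split()}
    return 1.0 if words & flagged_terms else 0.0


def guardrail(text: str) -> dict:
    """Gate generated text before publication and explain the decision."""
    score = score_defamation_risk(text)
    allowed = score < RISK_THRESHOLD
    return {
        "allowed": allowed,
        "risk_score": score,
        # A plain-language explanation, in the spirit of the GDPR's
        # explainability provisions mentioned above.
        "explanation": (
            "Risk score below threshold; cleared for publication."
            if allowed
            else "Risk score at or above threshold; routed to human review."
        ),
    }


if __name__ == "__main__":
    print(guardrail("The minister opened the new bridge today."))
    print(guardrail("The minister is a fraudster and a liar."))
```

The design point is the 'reasonably foreseeable risk' framing: anything the system cannot clear with confidence is routed to a human reviewer rather than published automatically, and every decision carries an explanation of the kind the GDPR's explainability provisions envisage.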