Various kinds of AI (artificial intelligence) and ML (machine learning) play increasingly valuable roles in learning technology environments. For example, AI can be deployed to impressive effect across a library for content discovery and for individualizing learning journeys by learner profile.
However, at Laragh, unless a client explicitly directs us to use AI for a particular purpose, we strictly exclude AI from all of our content generation and localization services.
We believe that neural LLM (large language model) AI such as ChatGPT, at its current stage of evolution, introduces unacceptable risks for learners and providers when used to create learning materials. Editing thoroughly enough to catch well-known AI-generated ethical and cultural missteps and deep plagiarism (problems for which there is no ‘good-enough’ bar) is an overhead that by itself negates the initial efficiency gain. More critical still for educators, the introduction of plausible but false information and fabricated attributions is nearly impossible to guard against, even with a consulting subject matter expert reviewing the content in detail, line by line and image by image. In our view, this problem may be inherent to the synonym vector mapping model behind the recently released generation of LLM AI, which by design appears to prioritize sounding authentic over being correct. While being the ‘layman’s expert’ may be adequate in some fields, we believe education is not one of them.
Content accuracy is a primary responsibility of the education provider. Where our products are used to train people on legal compliance, cyber vigilance, foreign corrupt practices, or safe chemical handling, for example, the implications of providing false information are potentially catastrophic. At Laragh we have the experience and processes to develop robust, accurate content very efficiently, and we believe that shortcuts often prove to be false economies. Quality content appropriate for global audiences still has to be human-made, by suitably experienced people.