Why AI Needs Expert Supervision
AI offers incredible efficiencies in the organisation and presentation of information, and that is a significant part of training. So, naturally, there has been a rush to adopt LLMs and agentic AI in our industry. Even seasoned learning managers have felt pressured into unguarded usage.
Let loose, these same tools can swiftly corrupt your training library and damage your reputation.
Best known amongst the risks of poorly supervised AI usage is the near certainty that some correct-sounding but dangerously wrong ‘facts’ will slip past inexperienced or under-pressure human vetting. Once a client stumbles upon one such example, you lose every saving, and possibly more. At the very least, you then have to suspend every piece of training touched by the same AI process and review it forensically.
Poorly managed AI can also build litigation landmines into your training library, in the form of unacknowledged use of other people’s content. With detection tools rapidly evolving, content that passes all plagiarism checks today may soon be readily traceable to a proprietary source.

Less well known is the risk organisations are starting to run into when copyrighting their own content where AI was involved in generating it. Clearly, if your team has no learning design competence on par with the content you seek to protect, you are unlikely to succeed in establishing copyright. But even with some human oversight, IP rights are not automatically implied, as they used to be when that expert created the content unaided by cloud-based AI tools. To demonstrate ownership of IP, a number of qualitative measures are emerging. Foremost is being able to demonstrate the extent of human expertise in assuring the sources, implementing demonstrable content-specific guardrail processes, and thoroughly adapting and editing all AI-generated elements.
Responsible AI Usage
Various kinds of AI (artificial intelligence) and ML (machine learning) have increasingly valuable roles in learning technology environments. For example, AI can be deployed to impressive effect across a library in content discovery and in individualizing learning journeys by learner profile.
We use generative AI in content development only where efficiency can be gained without loss of quality or content integrity, and then, strictly within guardrails.
Responsible use of generative AI in education and training recognizes the strengths and limitations of the software. We use AI only with the explicit direction of a client, and we supply it only with approved source materials. AI-assisted material requires more vigilant planning, learning design and subject matter expert validation than material fully originated by a trained instructional designer, which in turn means that the efficiency gains are not as compelling as they might be in some other fields. So much so that in several content areas (e.g. legal compliance, engineering SOPs, OSHA regulations) we encourage clients to rely instead on our experience and conventional processes to develop robust, assuredly accurate content, efficiently.
Nonetheless, in the hands of experienced practitioners, generative AI has good applications in support of learning design, and the Laragh team has a head start in this area. Since 2020 we have produced training on AI, ML and LLMs (large language models) for various clients. We have also used ChatGPT in a controlled setting and developed internal guidelines for its safe deployment in learning material development.
Laragh’s Unique AI Offer
We are not aware, globally, of any e-learning content partner with anything near the depth and breadth of learning design experience that our team retains. To protect your product integrity while using AI strictly within parameters worked out for your particular training scenario, there is no better team to work with.