Now that we’ve hopefully inspired you with some examples, let’s explore some of the challenges in implementing AI-driven tools at an enterprise level.
Training models with the right data
With chat-based LLMs like ChatGPT and Gemini (formerly Bard) offering surprisingly accurate answers, it’s easy to think that deploying a similar tool internally can provide the same quality out of the box. But of course, a model can only be as accurate as the data it learns from.
In addition, we had to consider questions such as:
- How do we crawl data and ensure it becomes available to the model?
- Can we trust it to interpret and accurately translate documents in other languages?
- How will it interpret data in different currencies and number formats?
- Which software platforms and protocols do we connect it to? (CRM, ERP, Email…)
While a well-defined model is certainly powerful, it’s unfortunately not a substitute for well-organized, structured data. Strong data management practices are still essential to get the most from it. For this reason, we believe it’s better to pursue a custom implementation rather than an out-of-the-box solution. There are many conversations that need to happen, from small project teams to the board, and a standard SaaS product may not offer the customization your organization needs.
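To make the data-preparation challenge concrete, the sketch below shows one way raw records from different systems might be normalized before they reach a model. It is a minimal illustration under simplifying assumptions, not our implementation: the `Document` shape, the hard-coded exchange-rate table, and the fixed-size chunking are all placeholders for what a real pipeline would source from your own systems.

```python
from dataclasses import dataclass

# Hypothetical exchange-rate table for the example; a real pipeline
# would pull rates from a finance system rather than hard-coding them.
RATES_TO_EUR = {"EUR": 1.0, "USD": 0.92, "JPY": 0.0062}

@dataclass
class Document:
    source: str    # originating platform, e.g. "CRM", "ERP", "Email"
    language: str  # ISO language code, e.g. "de"
    currency: str  # ISO currency code, e.g. "USD"
    amount: float  # monetary value as stored in the source system
    text: str      # free-text body to be indexed

def normalize(doc: Document) -> dict:
    """Bring records from different systems into one consistent shape
    (single base currency, trimmed text) before indexing."""
    if doc.currency not in RATES_TO_EUR:
        raise ValueError(f"Unknown currency: {doc.currency}")
    return {
        "source": doc.source,
        "language": doc.language,
        "amount_eur": round(doc.amount * RATES_TO_EUR[doc.currency], 2),
        "text": doc.text.strip(),
    }

def chunk(text: str, size: int = 500) -> list[str]:
    """Split long documents into fixed-size pieces for embedding."""
    return [text[i:i + size] for i in range(0, len(text), size)]

if __name__ == "__main__":
    doc = Document("ERP", "en", "USD", 1250.00, "Q3 invoice summary ...")
    record = normalize(doc)
    print(record["amount_eur"], chunk(record["text"]))
```

However simple, this kind of normalization step is where questions about currencies, number formats and languages get answered once, instead of being left for the model to guess at query time.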
Protecting privacy
Naturally, we’d like to allow LLMs to crawl all our internal data; however, there were limitations to consider.
- Do we allow it to crawl salary information and other sensitive data? How do we prevent this information from leaking if a user asks a related question?
- Should we log each query from our users, or should queries be encrypted and confidential?
In our development process, it quickly became apparent that we needed to balance the desire for progress with the need to protect sensitive information. We also chose to limit employee oversight, to encourage users to trust the tool and use it as much as possible. In another implementation, logging every query may be more valuable, as it could allow interactions to be reviewed and the algorithm improved.
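As a rough illustration of that balance, the sketch below filters sensitive documents out of the crawl entirely and stores only a hash of each query unless an audit mode is enabled. The tag names and the hashing choice are assumptions for the example, not a description of our system.

```python
import hashlib

# Hypothetical tags for the example; real classifications would come
# from your DLP or records-management system, not a hard-coded list.
SENSITIVE_TAGS = {"salary", "hr-confidential", "personal-data"}

def is_indexable(doc_tags: set[str]) -> bool:
    """Exclude sensitive documents from the crawl entirely, so the
    model never sees data it could later leak in an answer."""
    return not (doc_tags & SENSITIVE_TAGS)

def log_query(query: str, audit_mode: bool = False) -> str:
    """Store the full query for later review, or only a one-way hash
    that confirms usage without revealing what was asked."""
    if audit_mode:
        return query
    return hashlib.sha256(query.encode("utf-8")).hexdigest()

if __name__ == "__main__":
    print(is_indexable({"finance", "salary"}))  # False: never indexed
    print(log_query("What is my colleague's salary?"))  # hash only
```

The simplest way to stop a model from leaking sensitive data is to keep that data out of its reach in the first place; the logging choice, by contrast, is a policy decision each organization has to make for itself.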
Preserving creativity and autonomy
We’ve all heard stories of drivers who relied too much on their navigation system and ended up driving into a field. Google and Apple Maps can’t be relied upon blindly, and neither can the output of our generative AI systems. Enterprise-level tools can’t operate without a method of verifying their results, but this needs to be carefully designed so that verifying an answer doesn’t take as long as finding it the old-fashioned way.
Similarly, no tool should be a substitute for creativity. Generative AI can enhance our existing ideas, for example by using image generation services to illustrate a blog post, or it can provide a list of ideas for inspiration. However, companies that lead will use this output merely as a starting point. Remember, when a genAI tool writes a sentence it’s merely predicting the next most likely word, while an image generator essentially shows us which coloured pixel appears most frequently next to another in images tagged with particular keywords. These tools are engineers of average. Unless we’re happy to mix all our paints until the palette turns brown, our human input is essential.
Reinforcing existing biases
Much has been said about the risk of AI reinforcing historical biases. Some good starting points are the Netflix documentary Coded Bias, as well as The A.I. Dilemma, a 2023 presentation from The Center for Humane Technology, which prompted the Biden administration to issue its AI executive order and explore options for regulation. In summary, it’s crucial to understand that AI-driven systems reflect the biases of the data they’re trained on. Rather than pinning all our hopes on designing this out of our systems, our responsibility is still to apply a critical human eye to the output.
Generative AI misconceptions
Speaking on a more practical level, genAI is unfortunately not a technology that can be easily implemented by downloading an app or subscribing to a SaaS tool (at least at the enterprise level). However, there are elements that may be simpler than you expect.
LLMs don’t have to be black boxes
While the exact process used to generate an answer cannot be fully traced, this doesn’t mean that LLMs are incapable of providing sources. As we detailed in our sustainability report example, citing the source of information is entirely possible. Furthermore, when a solution is custom-built, the algorithms and training data are known and belong to the organization, rather than being proprietary to a vendor.
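A minimal sketch of how citations can work in a retrieval-augmented setup is shown below. The model only sees chunks retrieved from an internal index, so each answer can point back to its source documents. The `call_llm` stub and the prompt format are placeholders for the example, not a specific product’s API.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    source: str  # e.g. a document title or URL from the internal index

def call_llm(prompt: str) -> str:
    # Placeholder for whichever model endpoint an implementation uses.
    return "(model answer citing [1], [2], ...)"

def answer_with_sources(question: str, retrieved: list[Chunk]) -> dict:
    """The model only sees numbered chunks we retrieved ourselves, so
    every answer can point back to the documents it came from."""
    context = "\n\n".join(
        f"[{i + 1}] {c.text}" for i, c in enumerate(retrieved)
    )
    prompt = (
        "Answer using only the numbered context below, citing [n].\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    return {
        "answer": call_llm(prompt),
        "sources": [c.source for c in retrieved],
    }
```

Because the retrieval step is under your control, the list of sources returned alongside the answer is known with certainty, even if the model’s internal reasoning is not.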
GenAI tools are a significant investment
Enterprise businesses tend to have plenty of skilled technical staff, and we believe it’s best to use existing human resources as much as possible; for example, your organization may already be highly competent at building databases or training an LLM. Conversely, it doesn’t always make sense to employ a technical person in-house. Perhaps instead of hiring a Python Developer, your organization would be better placed subscribing to a SaaS product that runs on Python. The savings could be invested in a Data Scientist who can restructure data in a way that LLMs can fully understand. These strategic decisions can have a significant impact on cost.
Learn more
At NTT DATA Business Solutions, we’re excited to see what can be built using generative AI. We also take our role seriously, as a responsible developer of what is a truly revolutionary technology. To learn more about generative AI for business, explore our related articles, or contact a member of our team.
Learn more about our Innovation & Technology Services