NTT DATA Business Solutions
Thomas Nørmark | Haziran 12, 2024 | 7 min read

Generative AI: balancing threats and potentials in a transforming world

Generative AI is here – from finance to supply chain, from healthcare, product design and customer service to content creation. Never before has technology had such a transformative impact on our world, empowering decision-making and driving creativity, innovation and productivity.

As more businesses embrace its opportunities, we can’t ignore the many challenges and risks. How should we manage these while responsibly utilising the technology’s potential? In this blog, we dive deep into generative AI, highlight its opportunities and threats, and share tips on how best to navigate these new waters.

Profile of a woman overlaid with digital interface elements representing AI.

Overview of generative AI and its evolution

Generative AI is a subset of artificial intelligence that, through algorithms and models, can generate highly realistic human-like content such as text, images or audio. The models learn from extensive datasets to quickly understand patterns and relationships to create original outputs.

The roots of generative AI lie in machine learning and neural network research from the 1980s. However, at that time, generative models were limited due to the lack of computational power and data resources. Thanks to the now greater availability of digital data and advances in deep learning with models like Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs), generative AI can process vast sets of unstructured data and has evolved to unimagined levels of sophistication.

Generative AI now creates insightful, creative, and relevant content almost indistinguishable from human efforts. It’s used in diverse business and tech applications for content generation, chatbots, product design, data augmentation, and more.

Yet, the revolution has only just begun.

The landscape of potentials in generative AI

From product design and development, supply chain optimization and customer service to content creation, generative AI is revolutionizing industry practices across all sectors.

Supply chain companies use AI algorithms to streamline processes, analyze vast amounts of data to identify inefficiencies and automate forecast planning, thus saving costs and improving performance. In the manufacturing industry, generative AI can help with spare part identification just by taking a photo of the needed part.

Generative AI can help healthcare professionals by providing real-time patient data and developing treatment plans to enhance patient care and healthcare efficiency. In product design and development, AI Algorithms explore design variations and expedite prototyping and testing, increasing efficiency and product innovation.

Generative AI examples are found in nearly every industry as it moves towards mainstream adoption.

The potential risks and challenges of generative AI

While the potential for growth and innovation is enormous, risks and ethical implications also exist.

Generative AI produces content that blurs the line between the real and artificial, raising questions about the ethical responsibilities of the creators and users. A growing web of ethical considerations emerges as the technology becomes more pervasive and humans rely more on AI-generated content.

Addressing the issues requires a thorough understanding of generative AI’s underlying principles and processes because it’s here that the ethical considerations are integrated into the frameworks.

Without a deep understanding and careful regulation of AI, we’re open to risks, such as accuracy and misinformation, bias, intellectual property risks and hallucinations. In addition, a solid ethical foundation for generative AI is vital to building societal trust and nurturing future advancements.

Let’s look in more detail at some of these risks:

  • Accuracy risks

According to a recent McKinsey report, one of the most significant risks companies experience using generative AI is lack of accuracy (56%).

This can range from sending a customer the wrong information to having incorrect data on compliance documents, which could have far more costly and legal consequences.

Generative AI models like ChatGPT are trained on data sets consisting of hundreds of billions of parameters. However, responses to questions and prompts may contain inaccuracies if, for example, the inputs used are outdated, incomplete or inaccurate.

Many generative AI models, particularly large language ones, depend on older inputs and operate from a specific ‘knowledge cut-off date’ – when the AI model was last updated. Any information available after that date would not appear in the outputs. Therefore, transparency about the cut-off date and user awareness is essential in minimizing such risks.

Information inaccuracies may also arise due to the limitations of the training data. Generative AI only operates within the constraints of the data it has available. It may have trouble providing comprehensive contextual awareness where the information required lies beyond its training data. This can result in responses that lack nuance and contextual understanding.

  • Bias risks

There may be societal biases embedded within the algorithms and data sets of generative AI models, resulting in biased content generation or decision-making, perpetuating prejudices and discriminatory narratives against certain groups.

Addressing bias is a major challenge as generative AI becomes more widespread. Some mitigation strategies include more diverse data selection and exercising human vigilance with continual monitoring of outputs.

  • Intellectual property risks

Intellectual copyright infringement is also high on the McKinsey risk report at 46%. Several high-profile cases exist where AI tools have incorporated proprietary material without the creator’s consent. Most recently, the New York Times sued OpenAI and Mircosoft for copyright infringement over millions of newspaper articles being used to train chatbots.

Due to its complex nature, generative AI presents many challenges to traditional intellectual property norms and regulations. Courts are now considering applying the existing laws to generative AI content.

Apart from copyright infringement, there are the questions of licensing, usage rights, plagiarism, and ownership of AI-generated works.

  • Hallucinations

AI models use hundreds of billions of data sets for training and learn to make predictions by finding patterns. However, suppose we feed it incorrect or incomplete assumptions. In that case, the model may learn incorrect patterns and present glaringly false information as facts, termed hallucinations. The models lack the reasoning to apply logic or consider the data inconsistencies they provide.

Hallucinations can mislead people, spread bias and misinformation and, in a business environment, cause reputational damage and erode trust.

  • Ethical risks

The appearance of generative models like ChatGPT and DALL-E from OpenAI has brought the debate on ethics surrounding AI to the forefront.

Such considerations include transparency and accountability – AI systems often operate in a ‘black box’ where users cannot see how deep learning systems arrive at their decisions. This can have consequences in, for example, healthcare, where understanding how decisions are made is vital. Likewise, clarifying accountability and taking corrective action is necessary should things go wrong.

There are risks around data privacy and security. Because generative AI handles large amounts of personal and sensitive data, Organizations failing to prioritize this may run up against regulatory requirements such as Europe’s GDPR and face serious consequences. To avoid this, they must implement robust safeguards such as data encryption and access controls and conduct regular security reviews.

Sustainability is also part of the ethical discussion – Training large language models (LLMs) consumes enormous amounts of water and electricity and has a large carbon footprint. Before implementing such a system, companies must weigh the environmental impact of their actions.

The economic and operational implications

We’ve seen generative AI’s enormous potential to increase productivity and efficiency in all sectors. What are the economic implications of this?

In a 2023 report, McKinsey estimates that the total economic benefit of generative AI could amount to $6.1 trillion to $7.9 trillion annually. 75% of the value will come from customer service, marketing and sales, software engineering and research and development.

A substantial benefit will come from increased productivity, specifically in internal processes, where generative AI could augment workers’ capabilities by automating some of their activities. This can change the nature of work and have a societal impact as knowledge workers transfer to other tasks and readjust to new roles.

While it’s tempting, amid the hype and furore, to want to implement generative AI fully in your business, it’s first worth considering a few factors.

Be sure that the benefit of implementing the system will justify the cost. Training a large generative AI model is expensive. It may also need extra infrastructure and maintenance, adding to the costs. In addition, these activities may negatively impact your business’s sustainability goals.

Therefore, carrying out a cost-benefit analysis beforehand is vital. You may discover, for example, that a smaller LLM can also supply your needs for a fraction of the cost with less environmental impact.

Every business has different needs and goals. It’s advisable to see generative AI as a strategic tool in your toolkit and find the best fit for your business objectives.

 

Close-up of a men's rowing team in action.

Implementing generative AI - a rapid adoption or cautious integration?

What strategy should you adopt if you have conducted a value case analysis and decided to implement generative AI? Is it better to commit to a rapid integration or a more cautious, experimental approach?

Factors influencing your strategy include your organisation’s technical expertise, systems’ adaptability and robustness, and risk management experience.

It’s important to have an up-to-date understanding of the current state of generative AI and the options available. In addition, gathering insights from early adopters of the technology can provide valuable advice on implementation and help you choose the appropriate model. Whether you decide on a full rapid or partial implementation, defining success metrics in advance and validating the system with a pilot project is vital.

This will prepare you for any surprises, leverage best practices and minimize the risks to your business.

How to manage generative AI risks

We discussed some of the risks of generative AI above. As it evolves and gains mainstream adoption, we must continuously evaluate and manage the risks carefully.

Here are some strategic steps you can take:

  • Use first-party data and responsibly source third-party data

When training generative AI models, using first-party data collected directly and voluntarily from the customer is safer and preferable. It will ensure that the data is accurate, original and traceable. If you use third-party aggregated data obtained from advertisers or provider companies, ensure such vendors comply with the data privacy standards and verify their practices before engaging with them.

  • Train employees on appropriate data and generative model usage

Those working on the front line with generative AI should be trained in AI ethical best practices to assess outputs appropriately for bias or inaccuracies. It will also help cultivate a valuable security-aware culture within the company.

  • Invest in cybersecurity tools that address AI security risks

Companies must include AI in their security operations (SecOps) IT strategy. Part of this involves utilizing specialized cybersecurity tools that cover all security risks throughout the AI pipeline, from data collection and model training to end-user applications. Due to the wide application of AI in companies’ infrastructure, a solution that offers end-to-end protection against generative AI threats is vital.

  • Build a team of AI quality assurance analysts

In addition to creating an AI security-aware culture in your company, employing a team of AI Quality Assurance Analysts is advisable to ensure the models perform as expected and align with AI compliance guidelines. It can also conduct AI audits, maintain safety standards, monitor cybersecurity tools and conduct internal company training.

  • Research generative AI models before using them

Before implementing any AI models, do your homework first. Focus on, for example, whether the model’s applications fit your company’s predefined goals. What is its reputation in the field? Does it have a proven record in your industry? Evaluate the technology stack – understand the underlying algorithms and frameworks to see if they are compatible with your company’s infrastructure, and verify whether the model is trained in your domain.

Scalability, ease of integration and user-friendliness are also factors to consider.

Conclusion

Generative AI now permeates every aspect of society, impacting our personal and professional lives almost imperceptibly. This disruptive technology has incredible transformative potential for our world, and we’ve only used a fraction of its capability. The opportunities for progress and innovation in all industries are enormous, as are the potential risks and threats.

It’s a delicate tightrope to walk, pursuing innovation and progress while maintaining ethical practices, human originality and independence. To harness generative AI’s power responsibly, we must understand the technology’s underlying principles, protect its data, have solid regulatory guidelines, and ensure there’s a human in the checking loop. Ultimately, the technology is there to help us solve problems and flourish, so let’s move forward with our safeguards in place.

Learn more about our Innovation & Technology services

More blog articles about innovation