How your company can leverage generative AI (with pros and cons)


by Dilyan Dimitrov

August 3, 2023

7 min read


The concept of artificial intelligence has been around for much longer than most of us realise. And even looking at more modern interpretations of the term, businesses have been making AI strides for decades.

But in the past year alone, a type of intelligence called generative AI has surpassed many people’s expectations. Practically overnight, companies went from ignoring AI to looking for ways to leverage its power – some with significant success.

What exactly is generative AI, and how can you implement it in your own operation? Let’s find out. 

What is generative AI? 

Before we move on, it’s important to clarify that “artificial intelligence” is a very broad term. There are many types of AI sorted under various classification systems. 

In this article, we’re going to focus on generative AI and, more specifically, LLMs. You’ve probably heard both terms thrown around a lot at this point.

Generative AI (or gen AI) is an artificial intelligence model that generates content – be it text, images, video or sound. It works by taking a prompt, or command, from the user and running it through various AI algorithms before returning the desired output. 

Some prominent gen AI examples include: 

  • Autodesk generative design: AI-driven tools that design physical objects for manufacturing 
  • DALL·E by OpenAI: AI that creates realistic images and art from text prompts 
  • Midjourney: Similar to DALL·E, creates images from text prompts 
  • Jukebox by OpenAI: A neural net that generates raw audio, music, and even singing 

LLM stands for Large Language Model. It’s a type of generative AI specifically designed to create text content. Creators of LLMs train them by exposing them to massive amounts of text data. A neural network architecture called the transformer lets the model learn patterns, syntax, grammar, and semantic relationships between words. 

Popular LLMs include:

  • ChatGPT by OpenAI 
  • Bard by Google
  • Bing AI by Microsoft

How to use generative AI in your company 

Let’s get one thing out of the way. Unless you work at one of the world’s absolute top tech companies, there’s no point even thinking about training your own AI model on the scale of GPT or Bard. Developing and training generative AI takes an immense amount of time and money, not to mention very specialised expertise. 

Thankfully, you don’t need to do that – which is why most people don’t. There are much more efficient ways to leverage generative AI for your work:

Integrating a third-party model

Generative AI models like ChatGPT look like they can do it all. You ask a question, and you get a straightforward answer.

But when it comes to your internal operations, it’s not always that simple. The AI is trained on general data sets – does it know your specific data? Is it secure? When it gives you an answer, do you need to enter it into another system? 

For these and many similar reasons, you can’t just pull up ChatGPT and use it in your daily work. Instead, most companies integrate the generative AI’s API into their systems. An API (Application Programming Interface) is a piece of software that lets applications connect and communicate with each other. Integrating a provider’s API lets your own software call the underlying model programmatically. 

Let’s say you want to integrate ChatGPT. You build a layer of software, called an integration layer, over OpenAI’s API, thus integrating your internal information and data with the gen AI solution. Then everyone in your organisation accesses ChatGPT through that integration layer. You can also add supplementary functionalities, such as monitoring all requests that go through, enforcing company policies and more. 
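As a minimal sketch – assuming OpenAI’s Chat Completions request shape and hypothetical company policy rules, with the actual HTTP transport injected rather than hard-coded – an integration layer might look like this:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical company policy: block prompts containing banned terms.
BANNED_TERMS = ("customer_ssn", "payroll_export")

@dataclass
class IntegrationLayer:
    """Thin layer between employees and a third-party gen AI API."""
    send: Callable[[dict], str]          # injected transport, e.g. an HTTP client
    audit_log: list = field(default_factory=list)

    def ask(self, user: str, prompt: str) -> str:
        # Enforce company policy before anything leaves the building.
        if any(term in prompt for term in BANNED_TERMS):
            raise ValueError("Prompt violates company policy")
        # Monitor every request that goes through the layer.
        self.audit_log.append({"user": user, "prompt": prompt})
        # Build the request payload in the provider's expected shape.
        payload = {
            "model": "gpt-4",
            "messages": [{"role": "user", "content": prompt}],
        }
        return self.send(payload)

# Usage with a stubbed transport (a real one would POST to the provider's API):
layer = IntegrationLayer(send=lambda payload: "stubbed answer")
print(layer.ask("alice", "Summarise last week's project updates"))
```

The key design choice is that employees never talk to the provider directly – every request passes through one place where you can log, filter, and enrich it.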

That’s what we at Dreamix did when we created a GPT-backed chatbot for internal information management. The bot connects ChatGPT’s functionality with our intranet system, offering static and dynamic information about the company, projects, teams, and individual experts. 

What to watch out for:

Third-party API integration is the most common way companies bring generative AI into their work. The main potential drawback is that you need to give a third party access to the data you’re integrating. API providers do have robust privacy policies for you to review, but it’s still a risk for many businesses. One way to mitigate it is to mask sensitive data, replacing it with placeholders before you send the request. 
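One simple masking approach – a sketch that catches only email addresses with a regular expression, where a real pipeline would cover many more data types – is to substitute placeholders before the request goes out and restore the originals in the response:

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask_emails(text: str):
    """Replace each email with a placeholder; return masked text and a mapping."""
    mapping = {}
    def _sub(match):
        placeholder = f"<EMAIL_{len(mapping) + 1}>"
        mapping[placeholder] = match.group(0)
        return placeholder
    return EMAIL_RE.sub(_sub, text), mapping

def unmask(text: str, mapping: dict) -> str:
    """Put the original values back into the model's response."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

masked, mapping = mask_emails("Contact jane.doe@acme.com about the invoice")
# The third party only ever sees the placeholder version.
print(masked)   # Contact <EMAIL_1> about the invoice
```

The mapping stays on your side, so the sensitive values never reach the provider.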

Run an on-premise pre-trained model

As an alternative to integrating an API of one of the largest models, you can download a smaller pre-trained AI model and run it locally. This option only became possible recently, as Meta made their Llama 2 model available for commercial use. 

Running a pre-trained model removes the risk of sharing information with a third party – everything happens locally. At the same time, as the name indicates, someone else (a large corporation) has already invested the time and resources to train the model. This is a good option if you want to avoid sharing your data and are okay with using a smaller/weaker model. 
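As a rough sketch – assuming the Hugging Face transformers library and access to Meta’s Llama 2 chat weights, which require accepting Meta’s licence – running the model locally could look like this; the helper follows Llama 2’s chat prompt template:

```python
def format_llama2_prompt(system: str, user: str) -> str:
    """Wrap a system and user message in Llama 2's chat prompt format."""
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

def generate_locally(prompt: str) -> str:
    """Load Llama 2 from local weights and generate a completion.

    Not executed in this sketch - downloading and running the model
    needs several gigabytes of weights and serious hardware.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
    model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=200)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

prompt = format_llama2_prompt(
    "You answer questions about internal company documents.",
    "Summarise the Q2 report.",
)
```

Everything here runs on your own machines, which is exactly the point of this option.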

What to watch out for:

Even though the model is pre-trained, you may still need to fine-tune it to your own needs after download – which requires specialised AI expertise. Then you’d need to buy and set up your own servers, which can be a significant expense, and maintain them for as long as you intend to use the model. All in all, this approach may give you privacy, but it does require a serious amount of money and effort. And ultimately, the model you get still wouldn’t be as efficient as the big names on the market. 

Develop your own model from scratch

Finally, you can access generative AI benefits by creating your own model on-premises. This is rarely the recommended route – such a model would be a smaller and weaker version of what you can get by integrating a third-party API or even running a pre-trained model. But it could still serve your internal needs. 

This is a great option for companies wanting to keep their data private. Having created and trained the model yourself, you retain complete control over its functions. 

What to watch out for: 

Training an on-premise generative AI model, even a smaller one, comes with a significant investment in both hardware and maintenance. Unlike the pay-as-you-go model third-party integrations offer, this option comes with a much higher upfront cost. 

And after all that, it’s worth remembering that any smaller-scale model you train yourself isn’t going to live up to the big names in the industry, performance-wise.  

Pros and cons of using generative AI

There’s no doubt that LLMs have taken the world by storm lately. And while using them does carry many benefits, there are some potential downsides, as well. Let’s look at some of both: 

Pros: 

Efficiency

Generative AI can help you streamline many internal processes and interactions, saving you and your employees time and effort. It can quickly generate large amounts of output, create reports, or answer questions. 

Scalability

Generative AI can handle large workloads without changing its output quality. As your business grows, you can easily scale up your AI system to handle an increasing number of prompts. 

Customisation

GenAI models can be fine-tuned and personalised to fit different business needs. You can train the model on your company data and access it with specific prompts to ensure you get the needed interactions. 

Cons: 

Hallucinations

This is the most commonly cited issue with artificial intelligence. A “hallucination” is a confident response by the AI model that is factually incorrect. If you’ve spent time talking to any of the generative AI models, you’ve probably encountered a few of those. An excellent way to minimise the risk of hallucinations is to set up your integration to always provide an original source alongside its response.  
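One way to sketch that – with hypothetical helper names, where the sources would come from your own knowledge base or intranet – is to ground the prompt in retrieved documents and instruct the model to cite which one it used:

```python
def build_grounded_prompt(question: str, sources: dict) -> str:
    """Build a prompt that forces the model to answer from, and cite, sources."""
    source_block = "\n".join(
        f"[{source_id}] {text}" for source_id, text in sources.items()
    )
    return (
        "Answer the question using ONLY the sources below. "
        "End your answer with the ID of the source you used, e.g. [doc-1]. "
        "If the sources do not contain the answer, say you do not know.\n\n"
        f"Sources:\n{source_block}\n\n"
        f"Question: {question}"
    )

# Hypothetical example question and source document:
prompt = build_grounded_prompt(
    "Who leads the Atlas project?",
    {"doc-1": "The Atlas project is led by M. Petrova."},
)
print(prompt)
```

Because the response carries a source ID, a reader (or a downstream check) can verify the claim against the original document instead of trusting the model blindly.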

Data privacy and security

As we mentioned before, integrating a third-party solution requires sharing your data, which could pose potential security risks. 

Quality control

As far as AI models have come, they’re still no substitute for trained humans. They can produce errors, miss context, and generate irrelevant or inaccurate information. It’s sometimes necessary for humans to carefully review and edit an LLM’s output, which is no small amount of work in and of itself. 

Wrap up

Generative AI models are revolutionising the way we do business. The benefits of leveraging their capabilities are undeniable. However, it’s also crucial to acknowledge potential downsides and the challenges that a successful implementation may pose. 

There are several ways to utilise the power of generative AI in your operation, each with its own trade-offs. Depending on your priorities, you could go with a third-party integration for the best performance, download a pre-trained model for privacy, or train your own smaller model. 

Whichever option you choose, making AI a functioning part of your organisation is an impressive technical challenge. If you require assistance with either planning or implementation, partnering with a custom software company with AI expertise can be immensely beneficial. 

Ultimately, the technology has the potential to transform various industries and streamline processes. By understanding how generative AI works, you can harness its power to drive efficiency, innovation, and growth in your operation.

A reader who loves writing, a marketer who loves tech, a nerd who loves sports. Dilyan, our resident writer, half-jokes that his days are filled with everything you can think of - except free time. He joined our team several years into his copywriting career - and he seems to feel at home here. Because, as he puts it, “there’s always cake at the office”.  If he doesn’t have his nose buried in a book, you can typically find Dilyan writing his latest piece, tinkering with his PC, or off swimming/cycling somewhere.