How to avoid generative AI hallucinations | NayaOne

Discover effective strategies to prevent generative AI hallucinations and ensure accurate, reliable outputs in your applications.


Generative AI has dominated headlines for some time now, especially recently with news of Sam Altman, CEO of OpenAI, the developer of the generative AI platform ChatGPT, being ousted and rehired within a week. But will generative AI be a technology that banks use long term? According to Karan Jain, CEO of NayaOne, it absolutely is.


Are banks doing enough to explore the perceived benefits of generative AI?


According to Davis, while AI has been around in various forms for decades, “Only now that we have significantly enabled the way computing power is delivered, has AI been brought back to the forefront.” Davis added that many industries have been using a form of generative AI for years, but in a limited way, giving the example of how a form of graph AI powers search engines. Beyond this, vector AI presents further opportunities, enabling search through audio and video.


Increased risk management could reduce generative AI hallucinations, where large language models fabricate answers that sound factual. However, hallucinations largely occur when models are trained on information scraped from the internet or suffer from a knowledge cutoff, leaving them unaware of anything after training. This may not be a problem for banks leveraging their own data.
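One common way to leverage a bank's own data against hallucination is to ground the model's answers in retrieved internal documents and refuse to answer otherwise. The article does not describe a specific implementation, so the sketch below is purely illustrative: the document store, the keyword-overlap scoring, and the refusal message are all hypothetical stand-ins for a real retrieval pipeline and model.

```python
# Illustrative sketch (not from the article): answer only from the bank's
# own documents, declining rather than guessing when nothing supports
# the query. Documents and the overlap heuristic are hypothetical.

def retrieve(query, documents, min_overlap=3):
    """Return documents sharing at least `min_overlap` words with the query."""
    query_words = set(query.lower().split())
    hits = []
    for doc in documents:
        overlap = len(query_words & set(doc.lower().split()))
        if overlap >= min_overlap:
            hits.append((overlap, doc))
    return [doc for _, doc in sorted(hits, reverse=True)]

def grounded_answer(query, documents):
    """Answer only from retrieved context instead of fabricating one."""
    context = retrieve(query, documents)
    if not context:
        return "No supporting document found; escalating to a human."
    return f"Based on internal records: {context[0]}"

bank_docs = [
    "The overdraft fee for premium accounts is waived up to 500 GBP",
    "Branch opening hours are 9am to 5pm on weekdays",
]

print(grounded_answer("What is the overdraft fee for premium accounts?", bank_docs))
print(grounded_answer("What is the share price today?", bank_docs))
```

In a production system the keyword overlap would be replaced by vector search, in line with the vector AI opportunities Davis mentions above; the key design point is the explicit refusal path when retrieval comes back empty.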


What are the limitations of generative AI?


As with any type of AI, there are many limitations to generative AI, and hallucinations are just one of them. Jain agreed, saying that the limitations are driven by the early-stage nature of the technology. “The reason why it’s an exploratory phase is because it’s not yet proven out of the stack. As banks are regulated, it will take a while before generative AI starts making it to public use. Humans in the loop will be needed, in the same way they were when algo trading came out – it was a long time before we let the bots run on the trade alone. And even then, it was supervised by the human in the loop, and there is a kill switch.”
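The supervised roll-out Jain describes, a human approving outputs plus a kill switch, can be sketched as a simple gate in front of the model. This is a hypothetical illustration of the pattern, not any bank's actual system; the class and message strings are invented for the example.

```python
# Hypothetical sketch of the human-in-the-loop pattern Jain describes:
# nothing reaches the customer without approval, and an operator can
# halt all automated output at any time via a kill switch.

class SupervisedAssistant:
    def __init__(self):
        self.killed = False  # kill switch state

    def kill_switch(self):
        """Immediately halt all automated output."""
        self.killed = True

    def respond(self, draft, human_approved):
        if self.killed:
            return "Service halted by operator."
        if not human_approved:
            return "Held for human review."
        return draft

assistant = SupervisedAssistant()
print(assistant.respond("Your loan is pre-approved.", human_approved=False))
print(assistant.respond("Your loan is pre-approved.", human_approved=True))
assistant.kill_switch()
print(assistant.respond("Your loan is pre-approved.", human_approved=True))
```

The analogy to algorithmic trading holds: approval gates can be loosened as confidence grows, but the kill switch stays.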


What are the risks and ethical concerns associated with generative AI development and deployment?


Following on from Wolf’s point around ethical concerns, Davis believes that a “risk-free generation of AI” needs to be created. For this, banks will need “a strong deterministic or rules component to any AI system. Number two, the system itself needs to be trained with data that you’re legally permitted to use. And there must be an awareness of consumer consent laws and privacy laws. Nobody wants their personal data trained in AI, so banks are looking at that carefully and asking: how do we represent someone’s relationships without putting personal information in there?”


What needs to be considered to deploy generative AI in a risk-free manner?


What does the future hold for generative AI? As evidenced by the views of experts within Scotiabank and Raiffeisen Bank International, there are many steps that need to be considered. According to Jain, as the “leaders in the pack” move between organisations, they will “cross pollinate” and, in turn, adoption rates of generative AI will increase.


Beyond hallucination, data ownership has been an age-old debate inside financial institutions: how should the product of digital sandbox data be treated, and can it be monetised? However, as the technology is now mature enough to be considered for external use, and regulations concerning AI and the end customer are coming to the fore, banks should not hesitate to leverage generative AI.


