Podcast
Root Causes 276: ChatGPT and Identity Reputation


Hosted by
Tim Callan
Chief Compliance Officer
Jason Soroko
Fellow
Original broadcast date
February 9, 2023
ChatGPT and similar AI tools are dominating the public's mind these days. In this episode we discuss the potential for people to attempt to use ChatGPT as a source of reputational analysis, KYC, and other information about individuals, companies, and other entities. These activities are potentially subject to both error and deliberate misdirection. In this episode we explain why.
Podcast Transcript
Lightly edited for flow and brevity.
We are in the business of security, so I wanted to talk to all of you security folks out there who are scratching your heads going, hey, how can I use this in my world? When you think about things like authentication, or things such as Know Your Customer, you might be tempted as a security professional to query ChatGPT for information about a person or entity, even in an automated way. That's going to be very common in the future.
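For concreteness, here is a minimal sketch of what that kind of automated lookup might look like against OpenAI's chat completions API. It is an illustration only: the helper name kyc_lookup, the model name, and the example subject are hypothetical, and it assumes an API key in the OPENAI_API_KEY environment variable. Nothing in it verifies the answer that comes back.

# Hypothetical sketch of the automated lookup discussed above: asking
# ChatGPT for attributes about a person or company as part of a KYC check.
# Assumes an OpenAI API key in the OPENAI_API_KEY environment variable.
import os
import requests

def kyc_lookup(subject: str) -> str:
    """Ask the chat completions endpoint for background on a subject.

    The return value is whatever the model generates. Nothing here checks
    that the answer is accurate, current, or unambiguous.
    """
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",  # illustrative model name
            "messages": [
                {"role": "user",
                 "content": f"Summarize what is publicly known about {subject}."}
            ],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Treat this as an unverified claim, not a vetted identity attribute.
    print(kyc_lookup("Jason Soroko"))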
It wasn't that long ago – maybe it's even still the case today – that you had to answer two or three questions about yourself that only you would know. Well, the problem with ChatGPT is that all kinds of attributes can now be looked up on just about anything and anybody, and as OpenAI themselves will say, there are no guarantees of the correctness of the information it gives you. A simple lookup of myself showed me as much. Also keep in mind that ChatGPT is not like the traditional computer systems you and I are used to from years ago, Tim. When it doesn't quite know what you meant – because there are ambiguities, especially ambiguities in the information it's finding, and it's not sure how to present that within the model you've defined for the AI – it won't ask. OpenAI has said in one of their lists of ChatGPT's limitations that they've chosen not to prompt the inputter, the entity providing the input, with clarifying questions to break the ambiguity.
So there are two ways the answer you are getting could be wrong. One is that it's flat-out wrong, because the information the model was trained on was wrong or it's interpreting that information incorrectly. But I would say the bigger problem is that it's not sure how to interpret the information, and there are ambiguities it will not prompt you to clarify. Those are two very important things when you are developing security attributes. The attributes have to be accurate, and if you are using AI, you'd want to be asked to clarify ambiguities – and OpenAI is providing you neither of those things. So, security professionals who are into Know Your Customer – KYC – and attribute building, and I'm talking to all of you in the young set out there doing really cool things with distributed identities and all that kind of stuff: be careful using AI. Maybe in the future this will change and the AI models will be better.
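To illustrate that warning, here is a hypothetical guard a consumer might wrap around the earlier kyc_lookup sketch. The corroborate function is a stand-in for a check against an authoritative source, not a real API, and the ambiguity heuristic is likewise only illustrative; the point is that the caller, not the model, has to handle both failure modes.

from typing import Optional

def corroborate(subject: str, claim: str) -> bool:
    """Placeholder for a check against an authoritative registry or document."""
    raise NotImplementedError("replace with a real verification source")

def build_kyc_attribute(subject: str) -> Optional[str]:
    claim = kyc_lookup(subject)  # from the earlier sketch
    # Failure mode 1: the answer may simply be wrong. Don't record it as a
    # security attribute unless an independent source confirms it.
    if not corroborate(subject, claim):
        return None
    # Failure mode 2: the model resolves ambiguity silently rather than
    # asking a clarifying question, so anything that looks ambiguous should
    # go to a human reviewer instead of into the attribute store.
    if "may refer to" in claim.lower():
        return None
    return claim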
Well, thank you very much, Jason.

