SEF D.C. -- The Impact of AI on Securities Enforcement, Regulation, Compliance and Practice

Here is a transcript from the artificial intelligence panel at the excellent post-election Securities Enforcement Forum in Washington, D.C. The panelists were:

  • Kurt Wolfe, Of Counsel, Quinn Emanuel

  • Arlo Devlin-Brown, Partner, Covington & Burling LLP

  • Brian Kowalski, Partner, Latham & Watkins

  • Pam Parizek, Managing Director, Kroll

  • David Woodcock, Partner, Gibson Dunn

You can find the video here and the full agenda here.

00:00 - 00:34

Bruce Carton: Let's get started with our next panel. It's called The Impact of AI on Securities Enforcement, Regulation, Compliance, and Practice. This should be really interesting. And our moderator for this is Kurt Wolfe. He's Of Counsel at Quinn Emanuel in Washington, DC. Kurt's practice focuses on government and internal investigations, regulatory enforcement inquiries, and securities litigation. And as I think everybody probably knows, he also co-hosts the outstanding InSecurities podcast and is the program chair of today's event, meaning he'll be doing the introductions later on today. Kurt, welcome and thank you.

00:34 - 00:35

Kurt Wolfe: Good morning.

00:36 - 00:46

Bruce Carton: Next up, Arlo Devlin-Brown. He's a partner at Covington in New York. He previously served in the U.S. Attorney's Office for the SDNY, where he was one of the leading securities fraud prosecutors. Arlo, welcome.

00:46 - 00:47

Arlo Devlin-Brown: Thank you.

00:48 - 01:07

Bruce Carton: Next up, Brian Kowalski. He's a partner at Latham and Watkins in D.C. He's the former global vice chair of the litigation and trial department. Brian represents clients in government and regulatory investigations. He conducts internal investigations, advises clients on complex compliance issues. Brian, thanks so much for joining us. Welcome.

01:07 - 01:09

Brian Kowalski: Thank you.

01:09 - 01:28

Bruce Carton: Next up, Pam Parizek, Managing Director at Kroll in D.C., with more than 30 years of experience. She leads Kroll's Financial Investigations Practice for North America, and Pam previously served in the SEC's Enforcement Division, as well as in the forensic practice of a Big 4 accounting firm. Pam, great to see you. Welcome.

01:28 - 01:29

Pam Parizek: Thank you, Mr. President.

01:29 - 01:52

Bruce Carton: Finally, very pleased to introduce David Woodcock. He's a partner at Gibson Dunn in Dallas and also the D.C. office, and co-chair of the firm's Securities Enforcement Practice Group. David previously served as Assistant General Counsel for corporate at ExxonMobil and, of course, as Director of the SEC's Fort Worth Regional Office. David, welcome, and thanks for joining us.

David Woodcock: Thank you.

Bruce Carton: All right, Kurt, let me turn it over to you.

01:52 - 01:53

Kurt Wolfe: All right, good morning, everyone. Because artificial intelligence is so new and it's changing the way that we think about things, we actually wanna start this panel a little bit differently. And so if I can get some help from the AV team, we have a short video for you all this morning.

02:08 - 02:47

Gary Gensler (on video): I get why so many people are talking about artificial intelligence. It makes sense. AI is the most transformative technology of our time, fully on par with the internet. It's already being used in finance, where it has the potential benefits of greater inclusion, efficiency, and user experience. But let's face it, when new technologies come along, we've also seen time and again false claims to investors by those purporting to use those new technologies. Think about it. Investment advisors or broker-dealers might want to tap into the excitement about AI by telling you that they're using this new technology to

02:47 - 03:21

Gary Gensler (on video): help you get a better return. Public company execs, they might think that they will enhance their stock price by talking about their use of AI. Well, here at the SEC, we want to make sure that these folks are telling the truth. In essence, they should say what they're doing and do what they're saying. Investment advisors or broker-dealers should not mislead the public by saying they are using an AI model when they're not, nor say that they're using an AI model in a particular way but not do so. Public companies should make sure they have a reasonable

03:21 - 03:55

Gary Gensler (on video): basis for the claims they make and yes, the particular risks they face about their AI use and investors should be told that basis. AI washing, whether it's by financial intermediaries such as investment advisors and broker dealers or by companies raising money from the public, that AI washing may violate the securities laws. So everyone may be talking about AI, but when it comes to investment advisors, broker dealers, and public companies, they should make sure that what they say to investors is true.

03:57 - 04:31

Kurt Wolfe: All right, thank you. So with thanks to Chair Gensler, we now have some of the SEC's boilerplate out of the way, and we can dive right into thinking about how the SEC is addressing the rapidly evolving world of artificial intelligence, which as you heard Chair Gensler regards as the most transformative technology of our time. I'm pleased to have a terrific panel with me this morning. You may have noticed, though, that our panel does not include someone from the Division of Enforcement. So panelists, please speak freely, of course, this morning. But just to be clear, because we

04:31 - 05:04

Kurt Wolfe: don't have someone from the SEC on the panel, that does not mean that the Commission, or indeed the Chair, doesn't have anything to say about AI. In fact, they talk about it all the time. And if you don't believe me, just ask ChatGPT, which confirmed for me last night that AI is becoming a more prominent topic as the agency adapts to emerging technologies and their implications for securities regulation. All right, now I'm going to ask for a little bit of crowd participation here. Again, I know it's new, it's different, but you can do it, stick with

05:04 - 05:40

Kurt Wolfe: me. Plus I just wanna see if the coffee's still working. We're getting late into the morning. All right, so we showed you a video of Chair Gensler talking about AI and what the SEC is doing in the AI space. By show of hands, tell me if you think that is the only AI video Chair Gensler has released. Okay, nobody? Alright, show of hands again. Let me know if you think he's released more than five videos on AI. Alright, thank you for those brave souls out there who ventured to guess, you're correct. Actually, by my count, Chair

05:40 - 06:18

Kurt Wolfe: Gensler has put out six videos talking about AI expressly. That's more than crypto. That's more than any other topic. Now, these mostly take the shape of what he calls his “Office Hours with Gary Gensler” videos, the first of which came out in February. It was called “AI Investors, Issuers, and the Markets,” and I think Chair Gensler made his point clear. He observed that you cannot spell chair without AI. He went on to release videos in March (the one we saw), August, September, and October 1. Just last week, he's been on Bloomberg, C-SPAN, and CNBC talking about AI.

06:18 - 06:56

Kurt Wolfe: But he isn't alone. In March, now-former Enforcement Director Gurbir Grewal released a video announcing the SEC's first-of-their-kind AI-washing cases. And I think the video was actually a first- and only-of-its-kind enforcement release. So what does this mean? It at least means that AI is top of mind for the Chair, the Commission, and the staff. So this morning we want to talk about how that focus on AI translates, or doesn't translate, to enforcement actions. We want to talk about the challenges and opportunities AI presents for regulated entities and persons. And

06:56 - 07:30

Kurt Wolfe: we want to take a peek around the corner to see what kind of developments may be taking shape on the horizon, last night's events notwithstanding. We'll talk about that a little bit, too. Let's start with enforcement. One more polling question. I promise this is the last one. Again, show of hands, if you would indulge me. How many folks in the audience have been involved in an enforcement investigation relating to AI: products, services, disclosures? How many folks have an AI case, or have had an AI case? All right. Quite a few. I'm guessing not everyone raised their

07:30 - 08:05

Kurt Wolfe: hand, which means it's probably a pretty big chunk of this room. We're the SEC practitioners in this space who are getting these cases. So not only are we hearing about AI a lot, but a lot of us are actually involved in cases that touch on it. So is that translating to enforcement actions? The answer, I think, is not really. Or maybe just not yet. We've seen a handful of cases, five, maybe six, that are squarely in the AI space, and they all focus on so-called AI washing. Now, among those cases, we're gonna break them

08:05 - 08:20

Kurt Wolfe: into two buckets. One bucket is cases against asset management firms. The other bucket is public disclosure cases. And so we're gonna start with the asset management cases, and Brian's gonna walk us through some of the SEC's action in that space.

08:20 - 08:55

Brian Kowalski: Thanks, Kurt. And as Kurt mentioned, the actual enforcement activity in this space really kicked off in March of this year with the announcement of the two enforcement actions against Delphia and Global Predictions. The prior panel touched on these as well, but for those just joining, we thought we could talk about them a bit as well and offer our thoughts on those cases. The cases were announced in tandem. They were announced alongside video releases and speeches, all clearly designed to drive home this idea that AI is very top of mind, despite the fact that they were

08:55 - 09:42

Brian Kowalski: relatively small resolution cases. The Delphia case involved a firm representing in regulatory filings, advertisements, and social media that it used AI and machine learning to analyze retail clients' spending patterns and social media to inform investment decisions. And according to the SEC, that just simply wasn't the case, and they weren't using that sort of technology at all. They may have intended to do it, but they weren't. In Global Predictions, similarly, there was an investment advisor that operates an interactive online platform that pairs with a chatbot to provide investment allocation recommendations. Global Predictions claimed that their technology incorporated

09:43 - 10:26

Brian Kowalski: expert AI-driven forecasts and that it was the first regulated AI financial advisor, and again the SEC alleged that none of that was in fact the case. So those two cases came out in March, and then last month we saw a third case brought against an investment adviser in the AI space. It involved an adviser by the name of Rimar LLC and a related holding company that engaged in a SAFE offering in connection with the adviser. And they were alleged to have made false and misleading claims in pitch decks, online posts, in members-only chat rooms,

10:26 - 11:08

Brian Kowalski: and in emails about their purported use of AI-powered trading applications, an extensive infrastructure of coders and other technology support that they were reportedly using to develop and hone AI technologies, and references to their capabilities, which were really, I guess, allegedly being sent offsite to third parties to actually develop. So these are the three cases we have so far. I think we heard this from Corey to some degree during the last panel, and I think our reaction on this panel is similar. They do strike us as relatively straightforward disclosure cases, at least in how they're presented.

11:08 - 11:43

Brian Kowalski: Maybe Global Predictions and Delphia would have a different perspective on how much nuance there really was here, but certainly the orders themselves are put out in a manner to suggest that these firms said they were using a particular kind of technology and they weren't using anything of the sort, and so it's a pretty straightforward disclosure case. They don't get sort of deeper into the nuance of how is the AI working, is it really AI, does it work exactly in the way you're saying it's working, and those cases potentially could be coming, but for now, these are really pretty straightforward from our perspective.

11:44 - 11:54

Kurt Wolfe: All right, so that's Bucket 1, the cases against asset management firms. We're gonna pull some threads together later. But first, David, will you tell us a little bit about the second bucket of cases, which I'm calling public disclosure cases?

11:55 - 12:23

David Woodcock: Sure, so I'll talk about two cases, because I think that's what there have been. There are certainly more in the pipeline. And you'll hear this theme, as Brian just said: they were lying. They didn't have the technology. That's the theme of both of these cases. But because they're what we have, I'll talk about the details, because they're kind of interesting. The first one relates to a company called Joonko Diversity, Inc. And the case was not brought against the company, because it went into bankruptcy, but it was brought against the

12:23 - 13:02

David Woodcock: founder and CEO, Ilit Raz. And the idea was she had created this firm that was going to use AI to help with recruitment of diverse candidates. It was supposed to use AI to help clients find diverse and underrepresented candidates to fulfill their diversity, equity, and inclusion hiring goals, and would solve unconscious bias. That was her mission statement, and that was the tool, the product she was working on and marketing to investors. The firm raised about $21 million from venture capital and private equity firms. And it turned out that in fact her program did not

13:02 - 13:35

David Woodcock: do any of what she said. The facts are pretty straightforward. The SEC brought fraud charges against her for misleading investors about the quantity and quality of the candidates and the customers. The number of candidates on the platform: she claimed 100,000; there were in fact never more than 30. The quantity and quality of customers: she said they had several Fortune 500 clients; they had none. And then the company's revenue, which I

13:35 - 14:10

David Woodcock: believe was almost nonexistent. There were very few paying customers. So pretty straightforward. I thought the division director put it right in his quote in the press release. Sometimes the SEC has really good press releases, and this one I thought was particularly good, because he said it was an old-school fraud using new-school buzzwords. And I think that's exactly right. There was also a parallel action filed by the Southern District of New York. Ultimately, this was not a real AI solution. She just raised money and was ripping people off. She lied to the board, which is sort

14:10 - 14:43

David Woodcock: of how this was discovered. Investors were pushing back, asking questions. She lied to them. More importantly, I think, the takeaway for all of us in this room: she also lied on her LinkedIn profile, so do not do that. She created fake customer contracts, fake bank statements, fake customer testimonials. It was a disaster. One last point: this could have been touted as an ESG case, just given the focus of her business, but it was really an AI case. The other one is a great complaint

14:43 - 15:18

David Woodcock: if you, you know, get a chance to see it, because it has lots of pictures in it, and I like pictures. It involves a company called Destiny Robotics. And the founder, a woman named, I'm going to try to pronounce this, Megi Kavtaradze. I believe she was Georgian. Destiny claimed to be working on the world's first humanoid robot capable of serving as an in-home assistant and companion. They were going to do this by creating a comprehensive map of the key mechanisms of human intelligence

15:18 - 15:49

David Woodcock: and recreate that into a software system. And if you know anything about AI, you know that's kind of the game, right? That's what lots of companies are working on. That is not easy. The robot though was going to be able to form deep and meaningful relationships with humans and also reduce loneliness. So great idea. And then this is my favorite, though. It would also be able to perform various tasks, such as child care, psychological therapy, and crisis management. And I actually wondered if in a few years it might be on the panel here at this conference,

15:49 - 16:25

David Woodcock: discussing that. In any event, it was all a lie. None of that was possible. The robot was just nothing; it used off-the-shelf software. But a couple interesting things. She misrepresented the tech. She misrepresented the endorsement she had from another crowdfunder, who happened to be her, I think, boyfriend or husband. She also misrepresented the use of funds. And when I tell you the amount of money she raised, you will be aghast, I believe. But it was all a lie. She raised, in about a year, $141,000 through crowdfunding, crowdsourcing, whatever.

16:25 - 16:54

David Woodcock: And she spent about $13,000 on personal items. And I think this case could have also been marketed in a slightly different way, because she used some of the money to pay application fees for MBA programs, which I guess she had actually started during the investigation. So there could have been something more interesting there, but in any event, these cases are easy; these are just frauds. There's no digging under the hood of what the system was doing. People were just misrepresenting it, so I think it's pretty straightforward.

16:55 - 17:28

Kurt Wolfe: So there you have it, folks. That's the handful of AI cases that the SEC has brought to date. We want to take a step back and think about whether there are some things we can learn about their approach to enforcement in this space generally. Now, earlier today, on the "Day After" panel, they talked a little bit about the enforcement division's focus on ESG. And I think many people have drawn similarities or comparisons between what's happening with ESG and what we currently see happening in AI. And so, Arlo, tell me, can we think about this, or would it be fair to think about AI enforcement as ESG 2.0?

17:29 - 18:06

Arlo Devlin-Brown: I think it's actually a little more complicated than that, and if you'll excuse a maybe terrible metaphor, I think it's more that ESG and crypto got together and had a baby, and that's AI, because it really has features of both. All the companies are saying they're going to do it, so you have the AI-washing thing, but you also have some of the hype and opacity around crypto here. And the difference, I think, really, between both of those

18:06 - 18:46

Arlo Devlin-Brown: things is, I believe anyway, that AI is going to be a hugely transformative technology that, like the birth of the internet, may be rough at first, but over time is going to reshape public companies and their profitability. I mean, look at what's happening with Nvidia's share price right now. So I think the difference really is you are going to see public companies make claims about their use of AI. If a chipmaker said they had some technology that was more effective than Nvidia's GPUs at running inference for large language models or

18:46 - 19:12

Arlo Devlin-Brown: something like that, that could be hugely market-moving. So you're gonna see, I think, these disclosure cases, these two-bit fraud cases, where I've got an AI that'll automate legal conferences. But I think you're also going to see public companies make representations about their ability to do AI that are going to come under scrutiny by the SEC. Those will be challenging cases to make, but I think you're going to see some of those cases.

19:13 - 19:51

Kurt Wolfe: So maybe ESG isn't quite the lens we should look at these cases through. But I think one of the through lines here is disclosure obligations, or disclosure shortcomings, by the entities that have been charged in SEC enforcement actions. And I think Corey Schuster spoke to that in the last panel when he said something like, not a quote, but that they're applying a standard thought process or analysis to these types of AI cases, particularly looking at how firms use AI and what they say about how they use AI. So, Brian, what do you think? Are these kind of just bread-and-butter disclosure violations at the end of the day?

19:52 - 20:25

Brian Kowalski: Well, I think that what you've seen in the cases we talked about, that seems to be largely the case. That is not to say that there isn't anything notable or interesting about these cases. I mean, there can be emphasis and focus on different types of disclosures. And to the extent that focus has been turned towards AI, that's important for public companies, investment advisors, broker dealers to understand and focus on. I mean, the cases that we talked about involve claims that were false, claims that couldn't be substantiated with any sort of documentation or sort of real proof.

20:25 - 21:00

Brian Kowalski: And so take it as an opportunity to think carefully about your disclosures in this space, and about what the answer is if we get the question: well, how can you prove this is actually a true claim? I think that's an important element, even if these are, at bottom, basic disclosure cases. The other thing that's interesting is that, following Delphia and Global Predictions, the plaintiffs' bar turned its attention to this AI-washing issue, and there have been multiple securities class actions filed. So that just continues to increase the

21:00 - 21:28

Brian Kowalski: focus on this area. I think Arlo's absolutely right, though, that even if there was a decision not to focus on AI as such, in the way that maybe has happened with ESG, this is a technology that is coming. It's part of what our clients are going to be using. And it's not something that's entirely dependent on the SEC deciding to make it an enforcement priority. It's something that's going to be an important part of the business ecosystem generally.

21:29 - 21:46

Kurt Wolfe: Agreed. Pam, I know you've been thinking a lot about another enforcement action, one where there were actually parallel criminal proceedings, that hasn't been touted as an AI action but nevertheless hits on some of the themes. It involves some AI tools, and that is the Kubient case. So will you tell us a little bit about Kubient and what happened there?

21:47 - 22:28

Pam Parizek: Yep, absolutely. As most panelists have already mentioned, we see a lot of the same old traditional fraud elements happening again and again. This one, for me personally, was an interesting one, and a particularly troubling one, as a former audit committee chair of a small publicly traded digital media and advertising company. And I found the facts to be truly preposterous. And I think it underscores a lot of the real dangers of AI today. So Kubient was a company that provided what it claimed was a solution for digital media and advertising companies. It claimed

22:28 - 23:16

Pam Parizek: in its public filings and in its offering memoranda that it was able, using artificial intelligence, to differentiate human traffic, which is used to price digital media ads, from traffic and hits that were generated by software bots and the like. And it presented this information to the public. It conducted a public offering. And one of the challenges was that this technology had not really been proven. They represented that they had run a beta test and that the beta test had demonstrated that this tool was actually 300% more effective than the tools used by certain other customers' partners in

23:16 - 24:03

Pam Parizek: detecting fraud associated with some of these digital media ads. In reality, it was all false. They didn't even obtain the data to do the beta test. Instead, they used their own data from different customers, anonymized that data, claimed to put it into the tool, and made all of these representations based upon this fake data. But it gets even worse, because the company had earned not very much, $1.3 million, based upon two customers who were supposedly in this data test. And for people who were really paying attention, that represented most of the company's

24:03 - 24:52

Pam Parizek: revenues for Q1 2020, and it was roughly 95% of its revenue for the period in which it conducted its secondary offering in August of 2020. But yet it went to market. The IPO raised over $12 million. The secondary offering raised over $20 million. But the day the secondary offering went effective, it came to the attention of the CFO and of the audit committee chair that there were some questions about the validity of the data, and that those questions might impact the reliability and integrity of the financial statements that had previously been

24:52 - 25:42

Pam Parizek: filed, that had been touted in the company's roadshows to investors. And instead of raising a flag, maybe pulling the offering, the CFO and the audit committee chair did absolutely nothing. They conducted no due diligence on the allegations. Instead of addressing it square on, they just really stood aside. They did have a consultation with counsel, but one would say that they really needed to do more than what they did. And it got even worse, because when these executives met with the company's independent auditors, they did not disclose the information to the auditors. The CFO was aware

25:42 - 26:29

Pam Parizek: that there were allegations that there were misstatements out in the public domain. And yet, in his inquiries with the auditors and in his certifications, none of that was disclosed. Likewise, the audit committee chair, I would say even worse, not only excluded the auditors from the audit committee meeting during which they discussed this very issue, but again, during the annual year-end audit, when the auditors asked their standard questions about whether you're aware of any actual or suspected fraud, they said no. So, not surprisingly, the CEO was charged, the CFO was charged, and the audit committee chair was

26:29 - 27:13

Pam Parizek: charged for failure to exercise their fiduciary duties to their constituents. And I think it's really an important lesson. And it's really a travesty, too, because these individuals are put in a position of trust and confidence. And the audit committee is supposed to be able to take a stand. And I recall reading something that just really threw me for a loop, which was the audit committee chair said, oh, we can't say anything, we'll get fired. It's like, hello, your responsibility is to undertake an independent investigation to review these matters. You're a fiduciary for the investors who are

27:13 - 27:53

Pam Parizek: relying on you to make this important exercise of your power and authority in the interest of the public. So that was a huge failure. And I understand that maybe most boards don't really know a lot about AI. And it is vague in many respects. But there is a duty to stand up. And if you don't know, I mean, look at all the professionals we have in this room who could advise people, whether it's on the legal side, whether it's on the accounting side, whether it's on the data science side. There are other ways that this type of matter could have gone.

27:54 - 28:24

Kurt Wolfe: All right, so that's the landscape in terms of the cases the SEC has actually brought. And I think one of the things that occurs to me is that we're maybe getting some mixed signals here in terms of how important this actually is. At the same time that we have Chair Gensler issuing new videos every month, Kubient was announced in September, but it wasn't touted as an AI case. And then we've got two cases, Rimar and Destiny Robotics, in October, so those didn't hit the deadline to even be included in the SEC's annual enforcement report under a nice bold

28:24 - 28:43

Kurt Wolfe: heading on continued focus on AI enforcement. So I think we're getting a little bit of a mixed message here. So a question, and I'll start with David but throw it out to the panel: should we expect to see more of these cases in the pipeline? And maybe, just in light of recent events, do we think this will continue to be a priority in the new administration?

28:44 - 29:17

David Woodcock: So I don't know whether it will be a priority in the new administration, but I think, like Arlo said, AI, however one defines that, is becoming so pervasive at companies and advisers, so much money is being put into understanding how companies can use it, and it's being developed at such a rapid pace, that it will definitely be a theme. There will be more AI cases, no matter which, well, we know what the administration is now, but under any administration. And these kinds of cases that we've all been talking about, they're just frauds.

29:19 - 29:39

David Woodcock: They're gonna be brought no matter which party is in power. And so I think you will continue to see those as people take advantage of the sort of frothy, hot, you know, high times for AI to put AI in their names; they'll do all kinds of things; they'll sell it when they're not actually doing it. So I think you will see more cases under this administration or any other.

29:41 - 29:43

Kurt Wolfe: Anybody else wanna jump in on that one?

29:44 - 30:19

Brian Kowalski: No, I agree with David. I think, you know, the SEC has already flagged a lot of potential enforcement areas in this space beyond disclosure that frankly are a little more novel and interesting, and dependent on the fact that AI itself is involved. In particular, whether it's fraud and market manipulation and trading issues, conflicts that could arise from AI being used to identify and prioritize investment interests, or just generally using an AI tool where you don't entirely understand how it's going to work and what outputs it's going to generate for any sort of

30:19 - 30:54

Brian Kowalski: business or trading function. There's a case that's not in the securities context, but I think it's interesting and sufficiently analogous, and you could see how it could transpire in the markets as well: the Southern District of New York indicted an individual in September who had used AI to create thousands of songs with randomly generated song and artist names, created bot accounts to stream the songs, and generated $10 million in allegedly unlawful royalties. So that's something where it doesn't take a lot of imagination to see it coming up in a market context either.

30:55 - 31:03

Brian Kowalski: So in other words, I think we see these disclosure cases, but I suspect there's going to be a broader set of AI related enforcement matters over the next several years.

31:03 - 31:22

Kurt Wolfe: Yeah, I mean, so you and David are both kind of hitting on the point that some of these are just fraud cases, or there's an element of fraud. And whenever that happens, there's potential criminal liability. We saw that in Kubient. So, Arlo, tell us a little bit about where the DOJ, or the USAOs, or even some state criminal prosecutors may fit into this matrix.

31:23 - 31:55

Arlo Devlin-Brown: Yeah, so I think you're going to see the same thing essentially that you're seeing with the SEC and the DOJ has made some announcements in the past year that go to this. I mean, one set of announcements back in February, and there's been some follow-up. But essentially, the department is going to focus on the use of AI tools to commit ordinary frauds, ordinary crimes, and sort of go after them in a harsher way. They're going to seek a sentencing guideline enhancement. And I think you're going to see that. You're going to see some of these, make

31:55 - 32:29

Arlo Devlin-Brown: an example of cases, much like you saw with the Silk Road case. This goes back, but that was the dark web heroin market and, you know, murder-for-hire market. People had the idea perhaps that because it was on the web, it wasn't going to be taken as seriously as something more in the real world, and I believe the guy who was running that is still doing life. So you're going to see, I think, some people being made examples of there. But I think the other way the DOJ is going to tackle this that may be of much more

32:29 - 33:15

Arlo Devlin-Brown: relevance to a lot of our clients is on the compliance side, because as you know, when companies face DOJ investigations, as with SEC investigations, the DOJ often has an evaluation process in deciding the appropriate resolution for that company that focuses on its compliance program. And the DOJ has now put out guidance this fall that makes very clear they're going to look at how companies regulate risks around AI in their compliance programs and in the risk operations of the companies generally. And that'll apply to financial institutions and non-financial institutions alike. So I think you're gonna see scrutiny applied at that level too by the DOJ.

33:16 - 33:54

Brian Kowalski: Yeah, I think it's interesting that the updated Evaluation of Corporate Compliance Programs talks about how AI-related risks are mitigated and also how AI could be used as a compliance tool. And it's an interesting scenario, I think, because it's so nascent how AI might be used effectively as a compliance tool. And I think we've seen in other contexts, whether it be data analytics or things like that, where the SEC and DOJ learn about how companies can effectively use those tools through presentations about companies' or firms' compliance programs and through

33:54 - 34:26

Brian Kowalski: investigations. And I think over time, we should expect that they're going to be hearing some of the interesting cutting-edge things that our clients are figuring out how to do with AI in the compliance space, and that's going to start to set a bar for what the expectations are around it. So we could have a very similar dynamic there to what we've had with other aspects of compliance programs, where sort of a market standard can start to get set. There's probably an opportunity now for some folks to get out ahead on that front, while everybody's sort of trying to figure out how best to use these tools.

34:27 - 34:39

Kurt Wolfe: Well, Pam, I know that you focus a lot on how regulated entities may use AI to further their compliance efforts, along the lines that Arlo and Brian are talking about. So can you tell us about some of the tools or some of the applications that companies might use?

34:40 - 35:24

Pam Parizek: So first of all, of course, the regulated entities absolutely can and do use AI. And I know in the last panel, there was a lot of conversation about some of the ways that regulated entities use AI. I thought for this segment, I'd share something a little bit different, which is survey statistics about how risk and compliance departments are actually using AI in their business. There was a recent Moody's report titled Insights for Risk and Compliance Professionals. It surveyed 550 participants in 67 countries on their understanding of AI, their use of

35:24 - 36:11

Pam Parizek: AI, some of the risks, whether it was actually implemented, and whatnot. And not surprisingly, as we just heard, the use of AI is most prevalent in financial institutions, banks, and fintechs. Outside of those types of financial services companies, there's also greater AI use by larger companies. They sort of look at relative size, who has the budget to invest in AI, at least for those companies that have legitimate AI products. And across all of the folks who were surveyed, including corporates across all industries, most risk and compliance professionals agreed that the

36:11 - 37:06

Pam Parizek: three most impactful applications for AI were, first, and this will not be surprising to anyone here, transaction monitoring and risk detection. Second, individual and entity profiling and screening. And third, automation of manual tools to create efficiency. On the flip side, companies have not really widely adopted or implemented AI yet for their risk and compliance functions. Only 9% of companies surveyed were actively using AI. Another 21% were sort of in the beta test phase, bringing it on, starting to assess. Some 50% were considering it, and the rest were not considering it at all. And these

37:06 - 37:42

Pam Parizek: results are largely driven by the quality and the integrity of the data in house in these organizations because it's pretty well understood that in order for AI to be effective, you need to have good data. And that's where we see a lot of fintech companies fail and have problems with their AI applications, because there are issues with the data. It's not being properly reconciled, or whatever the case may be. So it's a real issue. The final data point I'll share with you on this topic is that 82% of respondents agreed that AI will, in the future,

37:42 - 38:25

Pam Parizek: offer significant advantages with respect to risk and compliance. But there are still concerns around data integrity, around the reliability of outputs, and around over-reliance on AI, to the detriment of the human component that's really needed to validate results. So I think the big takeaway is that there's tremendous promise, but there will be a need for more robust regulatory guidance. We might not see it in the US, and we'll talk about that later, but we'll see some in the EU and elsewhere. But it's a challenge that a lot of risk and compliance professionals are struggling with right now.

38:26 - 38:42

Kurt Wolfe: It's interesting that financial services companies are overwhelmingly using this, at least as the survey data would show. Arlo, are there particular risks that arise from broker dealers relying on AI as a compliance tool or part of their back office processes?

38:43 - 39:14

Arlo Devlin-Brown: Yeah, I mean, I think the watchword for company compliance is just going to be: stay current. Like, it was surprising hearing some of those statistics about companies just saying, I don't want to have anything to do with it, because I think you're going to have to have something to do with it, and you're going to face risk in either direction. I think the SEC will take a harsh look at broker dealers or investment advisors who outsource something to AI, some back office function, with the hope that it's going to make life simpler. And it

39:14 - 39:52

Arlo Devlin-Brown: actually results in securities rules being violated. I think that'll be a problem. But at the same time, I think if companies don't adopt technologies to avoid fraud, technologies that AI is going to help with, you're gonna have risks by not doing some of those things. AI can cause fraud. AI can also police fraud. And just one quick anecdote, which I think illustrates how the state of the art changes and how something that may be a top-of-the-line defense goes to high risk. I was on the phone recently with an unnamed financial institution, calling for

39:52 - 40:24

Arlo Devlin-Brown: some customer service thing, and it said, oh, would you like us to use your voice print for identification? Three years ago, that would have been gold standard, right? It's not a PIN, it's not a password, it's great. Now, it's very easy to make a replica of someone's voice with AI. And so I don't know if that standard holds up, but I think people are just gonna have to stay very current, in regulated entities and otherwise, and they're gonna have to figure out smart, prudent, risk-based approaches to technology.

40:26 - 40:58

Brian Kowalski: Yeah, look, I think those are great points, and I totally agree. I mean, I think there's a risk of misunderstanding and overestimating what AI can do, and that can be a challenge for investment advisors, broker dealers, or public companies who are trying to use this as a compliance tool. The way I sometimes think about the difference between the data analytics that we've traditionally used and sort of true generative AI: with data analytics, you have your data set, you have an algorithm, and you have some level of predictability about what it's going to do. And with

40:58 - 41:35

Brian Kowalski: true sort of generative AI, there can be a lack of understanding of exactly how it works. It can sort of train or teach itself, and start doing things that are not what the training at the outset was. And so you can't just turn it on and expect that it's going to run the way maybe a data analytics algorithm would. I think it probably is gonna require more oversight, looking at the output, looking at what it's doing. Is it changing the way it's carrying out its function? And that sort of monitoring, I think, is gonna be really important going forward when we're talking about really true generative AI.

41:37 - 41:52

Kurt Wolfe: David, you mentioned to me earlier something I think is very interesting, and it's the ways that public companies may be using AI to mine peer firms' or competitors' disclosures, maybe to see what they're doing. Tell us a little bit more about how they're leveraging AI in that way.

41:52 - 42:26

David Woodcock: So, whatever I say about this is going to be dated almost by the time I say it, because it's changing so rapidly, and it's sort of these things that are converging: this low-cost computing power that is nothing like the world has ever seen, but also incredible amounts of data that are required to be disclosed, required by regulatory agencies, required by investors. And so, every time I read an SEC rule, it's always at the very bottom, and it says the data here is required to be in XBRL format, or whatever. I just think, oh,

42:26 - 42:59

David Woodcock: that's something somebody's gonna be looking at, mining this data one of these days. And then, I think as Brian said, data analytics is not new, right? We've all been using Excel spreadsheets and paper. It's not new. But what AI, what machine learning offers is a sophistication that is getting better and better every day. And you combine those three things and no company will be able to avoid this. And so we don't have a lot of time, but some examples of ways, setting aside the compliance piece, which is great:

42:59 - 43:37

David Woodcock: transaction monitoring, contract review, those sorts of things. Document review, that's going to be revolutionized in the next few years. But the other things are like all of the sustainability data that companies are required to issue. Europe requires a lot; the CSRD requires a lot of data. I doubt the SEC's climate-related disclosure rule is going to require that, but that's another issue. A lot of data's being put out. And so what's happening is activists, environmental groups, and others are mining that data and using AI tools to monitor it. And so they're comparing companies' GHG

43:37 - 44:05

David Woodcock: emissions, as much as they can in real time, to all their competitors'. They're comparing their sustainability reporting up against the frameworks that are in place. So if a company says we're compliant with the TCFD or something like that, they can easily see how that company matches up. And so I think it's those kinds of things that are developing, and they're going to come to companies' doorsteps regardless, right? They're happening already. And I think that's where we're just gonna see a tremendous explosion in the next few years.

44:07 - 44:21

Kurt Wolfe: All right, we've got just about a minute left, so I wanna peek around the corner while we have a second. I think each of you maybe has a closing thought on what we may see from a regulatory framework, either coming into existence soon or in the next administration. So we have 30 seconds.

44:21 - 44:51

David Woodcock: I'll start. If you haven't read it, if you love European regulations like I do, I would highly recommend the EU AI Act. I'm not going to tell you much about it. It's supposed to create a comprehensive framework, but I'm going to read a quote that my freshman college son, who's very into this, told me when I asked him if he had ever seen it. He said, this is the EU saying we understand how product development works while they really have no understanding at all of how it works. But it is regulation at a level

44:51 - 44:59

David Woodcock: that we don't even pretend to know, and it's coming both in this AI space in the EU. So, I recommend a good bedtime reading.

44:59 - 45:08

Kurt Wolfe: All right. I see we're going to get hooked here by Bruce. Time is up. Thank you all for your attention. I think lunch is just about ready. Enjoy the rest of the conference.
