SEF Central 2024: The Impact of AI on Securities Enforcement, Regulation, Compliance and Practice
For those who missed Securities Enforcement Forum Central from last week (or even those who attended), here’s a transcript of the panel on AI’s impact on securities enforcement. It was quite interesting — especially, we think, on the importance of lawyers’ understanding how AI is actually used in corporate supply chains, etc., so they can give advice that makes sense. You can find the discussion on Docket Media’s YouTube channel here. Katten’s Danette Edwards introduced the panel, and the panelists were:
Nicole Wells — senior managing director at FTI Consulting in Chicago
Jessica Magee — partner at Holland & Knight in Dallas
Jerome Tomas — partner at Baker & McKenzie in Chicago
Joanna Travalini — partner at Sidley Austin in Chicago
Jeremiah Williams — partner at Ropes & Gray in Washington, D.C.
Time stamps are obviously on the left. In many cases a speaker continues speaking but a time stamp intervenes, so you can track to the video if you'd like. We'll follow up with more transcripts in the coming days so that they'll be a searchable resource.
00:00 - 00:38
Danette Edwards: We have a really exciting panel next. It's called the Impact of AI on Securities Enforcement, Regulation, Compliance, and Practice. Today our moderator is Nicole Wells. Nicole is a senior managing director at FTI Consulting in Chicago. Nicole has more than 20 years of experience providing expert consulting and expert services in forensic accounting investigations, antitrust criminal investigations, and complex civil litigation. She previously provided attestation services in the audit and assurance practice at Deloitte. Jessica Magee is our next panelist. She's a partner with Holland and Knight in Dallas, a former general counsel and a former senior officer in the . . .
00:38 - 01:12
Danette Edwards: SEC's Division of Enforcement. Jerome Tomas is a partner at Baker and McKenzie in Chicago. He is another SEC enforcement alum. He worked on internet and cyber fraud matters at the commission. Joanna Travalini is next up. She's a partner at Sidley Austin in Chicago. Earlier in Joanna's career, she worked in assurance and auditing at Deloitte. And last but not least, Jeremiah Williams. Jeremiah is a partner of Ropes and Gray in Washington, DC. Prior to joining Ropes and Gray, Jeremiah was a senior counsel at the SEC where he was a member of the Asset Management Unit.
01:15 - 01:55
Nicole Wells: All right, thank you. Can everyone hear me? Start there. Great. All right. Good afternoon. We're in the home stretch. Thanks for staying along with us. We have a riveting panel here today to talk about all things AI. Should be no surprise to anyone that this is a topic of the series, but really excited to dive in, and we actually have some very recent updates we'll be addressing along the way. So with that, we're gonna cover, broadly, four main areas. We'll first touch upon recent AI enforcement actions, discuss regulations both here and across the pond, and . . .
01:55 - 02:32
Nicole Wells: some recent activities that have come up in the past couple of days. We'll also touch upon AI compliance and we'll hear from our esteemed panelists on what they're seeing with respect to compliance measures amongst their clients and what they're advising. And then we'll talk about the use of AI in investigations, both in things that we understand the SEC is doing as well as what we're seeing in our own day-to-day experiences. So thanks again to everyone joining us and to the panel today. So first I think AI, a broadly used term, when we talk about it and . . .
02:32 - 03:05
Nicole Wells: when, you know, regulators view it as well, it is pretty, it is mostly a broad term, but I think the core of kind of the hype and the corresponding regulatory response and some of the enforcement is really more around the generative AIs, so the creation of content such as text, images, videos, audio, chat, GPT and the like. But we'll also touch upon some more of the traditional AI tools and measures that we're seeing growing in this space as well today. So first let's get to enforcement actions and I think on a previous panel, Kristin Pauley . . .
03:05 - 03:32
Nicole Wells: had mentioned the topic of AI washing briefly in terms of the SEC following closely, potentially false and misleading statements as it relates to AI, similar to crypto, SPAC boom, ESG initiatives. So Jessica, I'll start with you. Can you share with us, discuss some of the recent enforcement actions related to AI washing that we've seen? Sure, thank you. It's nice to be with you all today.
03:32 - 03:35
Jessica Magee: Sure, thank you. It's nice to be with you all today. So, wow, I feel really loud right now.
03:35 - 03:36
Nicole Wells: There's a little bit of an echo here, but I think we're okay in the audience.
03:36 - 04:17
Jessica Magee: Okay, it's my executive presence really coming through. So you're probably already familiar with the March 2024 actions. There were two so-called AI washing actions announced on the same day by the SEC in March of this year, against Global Predictions and another RIA, Delphia, if I'm pronouncing that correctly. You know, this whole washing term that the SEC is using, greenwashing, AI washing, right, it's just not truth telling, as alleged by the SEC. And I don't actually think there's anything particularly compelling about the AI underpinnings of those actions. . . .
04:17 - 04:54
Jessica Magee: You're right, Kristin spoke about them briefly earlier. Really what was going on there is you had RIAs that were separate from each other sort of doing the same thing as alleged, which is telling clients in the world through ADV and other documents and advertisements. We're using AI to improve your investment experience, to improve investment recommendations. In the latter instance, Global Predictions was saying that they were the first regulated AI financial advisor, which is sometimes what the SEC is just like asking to get a look from the staff. But really, the case boiled down in both instances to:
04:54 - 05:30
Jessica Magee: You said you were using AI and you weren't. You said you were using certain data in certain ways and you weren't. So that's no different than any other blanket misrepresentation case. I actually find the Global Predictions case interesting for the other things beyond the AI because there were marketing role issues around hypothetical performance. There was an interesting hedge clause issue in that matter. So to me that matter really stands out as a firm that was getting a look from Exam, AI was proving to be a little problematic because they told Exam yeah we're not . . .
05:30 - 06:06
Jessica Magee: . . . actually doing what we said we were doing, and then they got snapped for it by the staff a couple of years later in the enforcement action. I'm a little surprised that we haven't seen more of those kinds of cases. I think it's pretty typical, as you say, whether it's emerging technologies or interests of the day, social investing, greenwashing, crypto, what have you, that the staff is incentivized and often can find examples where company or individuals says X and it is not X, right? That's easy stats to get if you can demonstrably prove a misrepresentation. So I . . .
06:06 - 06:20
Jessica Magee: think the surprising thing about our AI washing or early AI enforcement is that we haven't seen more of it, but I think it's certainly to come. I think we'll probably see it coming in the policy space, which we can talk about a little bit later.
06:20 - 06:41
Nicole Wells: Yep, agreed. Thank you. On that note, Jerome, and we should have had a J name as a moderator as well, so a shout-out to my colleague Jake, who probably should have been up here; we'd have had a full-J panel. But Jerome, welcome. Can you tell us a bit about some of the risks of perhaps over- or under-disclosure as it relates to AI capabilities?
06:42 - 07:25
Jerome Tomas: Yeah, no, look, I think here's the crux of the issue with AI. If you look at Director Grewal's speech from, I think it was April, maybe March of this year, something like 65% of investors think that AI is highly important for companies to work into their operations. And if you include the moderately important group, it's like 80%. So there is a perception that the investing public, that investors, want companies to incorporate AI. The reality is, and I'm actually, side note, side hustle, I'm actually planning to put together a little album of AI-generated music. . . .
07:25 - 08:07
Jerome Tomas: I actually have a couple songs that I've already put onto the system, but what you realize in actually using it, using AI for real world applications, is that it's a ways away from actually being viable from something that is actually going to be widely and immediately usable in the marketplace. And so what does that mean? There's a difference in expectations. There's investors want this, the technology is here, executives, management want to do what they can to meet that thirst or those expectations. And so I think you have to look at it from, are your competitors actually using . . .
08:07 - 08:43
Jerome Tomas: AI in a way that's better than you? If you look at Chair Gensler's comments from April, If you're using AI, you have to be truthful about what your AI capabilities are and how far it is along on the maturity standpoint. But you also have to talk about what those risks are. And those risks could be you could be hit by AI type spoofs. There's a lot of talk about that, but I think the thing that nobody's talking about that we're going to see quickly is that there are going to be other companies, more nimble companies, that . . .
08:43 - 09:23
Jerome Tomas: could move quicker and are better able to employ AI in a way that makes your margins in your business potentially obsolete or at least cuts revenues. I'm thinking right now if I actually had any skill whatsoever, I would be looking at marketing music in a way that is way cheaper, way cheaper, than the publishing companies and the recording companies are able to put music out. And what that would do, if I use a different platform, I could potentially, again, and this is all hypothetical, you could really attack a business model that's been in place pretty much . . .
09:23 - 09:45
Jerome Tomas: since the phonograph has been created. And so that's what Gensler's talking about as well. Are there companies that can use AI in a way that you're not necessarily able to use that will make you obsolete. So I think you have to look at it both from not only the opportunities and speak truthfully about your opportunities, but also the risk. Because I do think that the SEC will be kicking around at those tires as well. . . .
09:46 - 10:17
Jessica Magee: I think that's exactly right. And I think that some of the risks that we've heard them talk about is concentration risk, where you've got the same kind of AI tools being deployed, third-party tools being deployed in regulated entities, things of that nature. And if the power goes out, that's how you know I'm a subject matter expert on AI. When the power goes out, your brokers can't deliver service, investment advisors can't, what have you. And with the human element, which we'll also talk about today, the human oversight element of what are we using, why are we using . . .
10:17 - 10:24
Jessica Magee: it, and how is it impacting our business and our service? These are big risks that you need to be prepared to talk about. I think you're right.
10:25 - 11:03
Jerome Tomas: And I think you also have to know, or at least search for, where the gaps are in the actual utility of whatever AI function you are using. Are there inherent biases? Are there limitations on what your tool will do? Here's a great example. About a month ago, our firm encouraged all the lawyers to use a proprietary firm AI tool that runs on ChatGPT. And what I did, if anyone's familiar with the song, and I knew the tool would say no, is give it a prompt to basically write the song "Blasphemous Rumours" by Depeche Mode. . . .
11:03 - 11:38
Jerome Tomas: If anybody knows the song and they know the underlying subject matter, write it down in search. You knew that the AI tool was going to be limited in its willingness to give you the content of that. But if you don't know where those limitations are, because you're not trying to game the system, you are actually opening yourself up to, if you're a regulated entity, certainly not knowing what the true use of your AI investing platform is. But even as a public company, you're not aware of what the limitations are of the representations you're making and the risks that are posed by the tools you're using.
11:39 - 12:21
Nicole Wells: That's great. Thank you. So we talked a bit about the recent enforcement actions as they relate to false and misleading statements and disclosures. If any of you have followed, just last week there was an interesting enforcement action that came out with respect to false statements that led to false revenue being reported. So I think it's now showing the trend of how companies, and individuals in this particular case, can utilize AI claims to potentially misstate financial statements and have a significant impact on the company itself and other gatekeepers. So Jessica, do you wanna tell us just a bit about the Kubient matter?
12:22 - 13:02
Jessica Magee: I think that's right. Yeah, so the matter's called Kubient, with a K. And it's really interesting. I think it is an incremental move in how AI enforcement is developing, because it is a false statement. But the false statement is not: we have AI, we're using AI, here's what we're doing with your AI. It's that, hey, we have captured revenue. We are recognizing revenue based on our successful use, in this case testing, of artificial intelligence, and using that proof of concept and proof of revenue capability to raise money in two offerings, which turned out to be a problem . . .
13:02 - 13:42
Jessica Magee: for the CEO who was charged and here he was saying that they were using 2 customers and testing their data through their flagship product, Kubian AI, KAI, to sort of real time detect fraud in online advertising auctions. And in fact did not get the customer data, actually had a contract for it, but didn't actually receive the customer data. Still wanted to recognize that revenue though. So said, hey CFO, run me some sample reports so I can show the auditors so we can raise the money, and that in fact happened. So that's a problem for the CEO, . . .
13:42 - 14:14
Jessica Magee: it's a problem for enforcement. I think sort of like the earlier cases where there are also really interesting notions playing out in terms of the marketing rule, in Kubient they filed a second action at the same time against the former CFO and the audit chair for gatekeeping problems because these are individuals who hold the gate hold the line at the company and when they come to know they're not necessarily seeing a red flag flying by in traffic, someone's like literally holding a red flag in their face saying, yeah, I didn't really put customer data through that, . . .
14:14 - 14:52
Jessica Magee: it was a sample set, they didn't look at it. In fact, they didn't investigate and they didn't take it to the auditor. So you see 10(b), 17(a)(1) lying to auditor charges. So a problem that could have been the third AI, okay, misrepresentation, but they really had it, turned into a much bigger problem at a governance level. And I think for the staff, that's another important message of we're gonna stay in this space because it is such an exciting opportunity. There's so much greenfield, companies that need that revenue. They need to secure an offering. They need to be . . .
14:52 - 15:04
Jessica Magee: delivering positive results. Seeing sort of that fraud triangle take shape, it's just another place for that to play out. That was last week, Kubient with a K, if you're interested in it.
15:04 - 15:42
Nicole Wells: Certainly seems to be an anticipated trend; we'll probably see more like that. Turning to regulation, we have a couple areas that we want to touch upon here. We'll start with investment advisors and broker dealers. In August, Chair Gensler made some remarks about conflicts of interest in artificial intelligence, stating that the SEC remains focused on how evolving AI affects investors, issuers, and the markets connecting them. Jeremiah, can you tell us a bit more about the proposed broker dealer and investment advisor rules and how the SEC is looking to address potential conflicts of interest across investor interactions?
15:44 - 16:28
Jeremiah Williams: Yes. The full name of the rule is a little bit of a mouthful. It's Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers. Predictive data analytics is the term that the SEC is using for really a broad category of technology. The rule has a defined term, covered technology, which includes not just things that we're talking about, like ChatGPT and traditional AI stuff, but really anything that's sort of a computer or a mechanical means to try and facilitate a business. So it's a very, very broad term. . . .
16:28 - 17:17
Jeremiah Williams: The rule is meant to cover any uses of these covered technologies and investor interactions, so things where you have a firm interacting with an investor. And the rule basically requires registered entities, investment advisors, broker dealers, to identify conflicts, first you have identify conflicts, and then eliminate or neutralize the effect of the conflict. It's a principles-based rule, so it's very general. It doesn't really say how a firm would have to go about eliminating or neutralizing these effects. It gives one example in the context of a robo-advisor. So people are familiar with robo-advisors. These are like automated investment advice to investors. . . .
17:17 - 18:00
Jeremiah Williams: And an example of eliminating the conflict would be just denying data. So you basically just don't let the tool have access to certain data. An example of neutralizing the effect would be to let the tool have data but then provide other data that could be weighted that would kind of counteract that. So those are some examples to give. This rule is very comprehensive, requires policies and procedures, training, annual reviews, record keeping. It was proposed last year and it's still open so timing of it is very, very uncertain. There's also been some pushback against the rule. . . .
18:02 - 18:33
Jeremiah Williams: Basically, the general comment, if you look at the comments, I'd say the most common comment is that it's unnecessary because we have general anti-fraud protections and that those anti-fraud protections would cover this technology or any other foreseeable technology and this rule is basically redundant and not needed. So stay tuned, we'll see how this goes. But this is one example of the SEC, we're talking about the AI washing. This is going beyond that to actual specific substantive regulation of how firms can use AI.
18:34 - 19:19
Jerome Tomas: Hey, Jeremiah, I'm interested to hear your thoughts, because we're talking a lot about the positive element, right? There's a thirst for AI; people want AI. I also think there's an element where people are suspicious of AI and artificial intelligence, and they don't necessarily always want everything that matters a lot to them to be driven by computer algorithms. They want a human eye and a human brain touching on things. So where do you see, from an asset management standpoint, the risk or potential, in my mind, for advisors to understate the amount that
19:19 - 19:36
Jerome Tomas: they are relying on AI in order to make their services seem more bespoke, whereas in reality they're largely reliant upon an underlying algorithmic trading model or AI-driven trading model. And there's a potential for a 10b-5 claim there as well.
19:36 - 20:08
Jeremiah Williams: Yeah, I think there's definitely a risk in either direction. And one of the reasons these tools are popular is because you can provide this sort of recommendation advice much more inexpensively. So yes, there certainly is a risk for an advisor to market this as bespoke and custom to you when it really kind of looks the same. And what you talk about as far as people wanting eyes on, I think the rule tries to address that, because the testing, that's the idea of having people look at this: does it make sense, is it doing what we expect. And so . . .
20:08 - 20:15
Jeremiah Williams: the idea is that kind of does incorporate that human element, but I think you're right that this is something where it's going to be a challenge for people to get the balance the right way.
20:15 - 20:37
Jessica Magee: I think it's going to be a challenge if that rule gets implemented to have staff that can investigate and enforce under it. Not to speak ill of the staff, but that takes expertise to crack into that. It's one thing to say you said this and it wasn't true, right? And that's why you see the messaging of cases over time. But that's a big lift.
20:38 - 20:49
Jeremiah Williams: Well, that actually touches on something that I'll talk about a little bit later. But the SEC has talked about its need for talent in AI and what it's doing about that. So it's at least trying to address that, but it's far from clear that will be sufficient for what they need to do.
20:50 - 21:24
Nicole Wells: That's right. So taking our discussion just across the other side of the Atlantic very briefly, and kind of in order of events that have happened over the past few months: the EU came out with its AI Act, effective last month, which aims to foster responsible AI development and deployment in the EU. And as you may know, the act introduces more of a uniform approach and a forward-looking definition of AI that really focuses on a risk-based approach and assessment. So we won't spend the next 22 minutes going into all the details around . . .
21:24 - 21:35
Nicole Wells: it, but we did want to briefly touch upon it and just, Jerome, I'll pose this question to you. How do you see the EU AI Act impacting US companies?
21:35 - 22:21
Jerome Tomas: Well, I think, to talk quickly about what the EU AI Act is: it's an act that breaks down uses of AI into prohibited, high-risk, generally permissible but subject to disclosure, and lower-risk categories. And those four gates, if you will, are subject either to outright prohibitions or to assessments to determine whether it's appropriate for that AI to be used for that particular purpose, or to disclosure, for example, on the lowest two. And so what you have to do is say, well, am I actually going to be using AI in a way that . . .
22:21 - 23:08
Jerome Tomas: triggers that law or a law like that? It's incredibly expansive. It covers companies that use AI systems in the EU. It also covers companies that have a physical presence in the EU, but it also covers companies outside of the EU who use AI-driven content that is shot to the EU. So in my hypothetical music publishing case, Jerome Music LLC, if I stream AI generated music and it's only in America, but then I shoot it into the streaming service through Spotify or otherwise into the EU, arguably I'm triggering that third element of application. So what you have . . .
23:08 - 23:44
Jerome Tomas: to do is know, well, is what you're doing going to be subject to that law? Here's the other sort of rub, which is, there's no federal regulation like that right now in the US, but states on an ad hoc basis, right? America, gotta love it. It's, you know, the states are developing laws that are mimicking the EU Act. And so what companies sort of need to be looking out for is what rules are they going to follow if they want to use AI in a way that's going to potentially trigger either the EUAA Act or the . . .
23:44 - 24:19
Jerome Tomas: NIST guidelines, right? So companies want a guideline that's kind of the lowest common denominator that they can generally follow. So looking at what services you're providing and knowing if you are triggering that. If so, you probably are going to want to look at, well, do we want to follow as a baseline from a compliance standpoint? Because not only do we have that law to worry about, but we have our other stakeholders and shareholders who want to know what we're doing if we're acting responsibly for AI. It's knowing what rule you're following, either prescriptively or permissively, and . . .
24:19 - 24:36
Jerome Tomas: then setting up policies and procedures in place to follow it. This question is beyond the scope of what we're talking about here, but the world is shifting and the EU, no surprise, is taking the lead on this. They took the lead on privacy in the 90s and they're taking the lead now on AI in the 2020s.
24:37 - 25:11
Nicole Wells: Yeah, as mentioned on a previous panel, the EU is ahead of the US on this one. Coming back to the United States: Jerome, you mentioned a few specific states have moved ahead, but in the US we've also seen an executive order come out and a few statements by the potential presidential candidates, though of course none of us has a crystal ball as to what's going to happen in November and how that might change things. Joanna, let me ask the question to you. What kind of approach to AI regulation policymaking do you see occurring in the US? And if you need to take two paths, that's fine too.
25:13 - 25:52
Joanna Travalini: Yeah, so as Jerome mentioned, there is no federal law right now. So what we're seeing is, at best, some regulation or some suggestion of laws that are, what I would say, tangentially related to AI, trying to touch, whether on certain principles or on a case-by-case basis, on things that could be related to AI, but there's nothing broad and sweeping. Nicole mentioned there has been an executive order, as recently as October of last year, that is attempting to direct Congress to establish certain privacy laws and that will govern AI use by federal agencies. It's also supposed . . .
25:52 - 26:33
Joanna Travalini: to be directed again towards federal bodies to try and develop guidelines in the AI sector. But what we're seeing is really the states taking the lead on developing policy that is particular to the needs of those states. Three states that are kind of leading that charge right now, we have Colorado that has already enacted its first AI legislation. That's supposed to be going into effect in 2026. It's very focused on bias, avoiding discrimination in the implementation and use of different AI systems. Utah also has a law that came out in 2024 that's very focused on consumers . . .
26:33 - 27:13
Joanna Travalini: and certain disclosure obligations related to people who are impacted by AI and it'll be enforced by Utah's Division of Consumer Protection. And then there's California, which has come out with a proposed regulation. It still has not been finalized yet, but is in the works. That is very focused on the disclosure of various businesses' use and implementation of AI-related systems. So I think what we're seeing here is that absent any broad sweeping federal law, which again, there's been some suggestion of it, but there's nothing strictly regulating it, the states are going to be taking over. And as . . .
27:13 - 27:26
Joanna Travalini: Jerome mentioned, that creates some complications for businesses as they're doing conducting business across state lines in terms of whose law are we following and how are they different, depending on what state you're conducting business in.
27:27 - 28:00
Nicole Wells: Yeah. That's great. Thank you. And then, because I can already see our clock ticking away here and we have plenty of other fun topics to cover, maybe let's just briefly talk about competition issues, because in July the FTC, DOJ, CMA, and European Commission came out with a joint statement that highlighted their focus on the risks that AI brings with respect to competition issues. So, Joanna, do you want to give a brief summary of where they're kind of zeroing in and what those competition risks that they're discussing currently are?
28:00 - 28:39
Joanna Travalini: Sure, so that joint statement is really the culmination of many years of these various competition authorities trying to understand the potential risks of AI to competition. There were three main risks highlighted, and just to go over them quickly: a concern about a small number of companies controlling key AI inputs, like chips, for example, or the specialist technical expertise that's involved with AI. Another one is the risk of large digital firms that already have a lot of the market power in this space trying to control AI distribution . . .
28:39 - 29:21
Joanna Travalini: channels going forward. And then the final one is concerning partnerships and other companies that are already in financial investment, have financial investments between them, really trying to steer how the market operates in terms of AI. There are also some consumer risks that have been identified, which is important. Obviously unfair use of consumer data, privacy related issues, potentially exposing sensitive information for customers. So I think a lot of this goes towards the importance of, which I think we'll get into in a bit, but making sure that companies have a general awareness of how AI is being used . . .
29:21 - 29:42
Joanna Travalini: in their supply chains, understanding the ways that AI is integrated into their own business and maybe even more broadly the industry in which they operate, but also making sure that safeguards are being implemented to protect not just at the business and company level, but also to the end consumer level, because there are a lot of concerns about competitive risk as it relates there.
29:42 - 30:17
Nicole Wells: Yep, and that dovetails very nicely into the last topic we wanna cover, which I guess is more regulation, or proactive approaches by the government: just yesterday the DOJ announced updates to its corporate compliance guidance to include an evaluation of how companies are assessing and managing the risks related to new technologies, such as AI, in their business and compliance programs. So Jessica, do you wanna briefly overview those key updates that are hot off the press as of yesterday?
30:17 - 31:00
Jessica Magee: Yeah, it's getting more expensive and complex to be a compliant company. God bless the compliance officers and organizations. So that's right, yesterday the guidance came out; DOJ issued a statement and in fact updated the policy materials. So the Evaluation of Corporate Compliance Programs really looks at exactly that: the compliance program of an entity under investigation, in connection with prosecutorial discretion, declination of any action, or how an action might be charged or penalties assessed. And as you all know, it's based on three guiding principles: whether the program is well designed, whether it is carried out earnestly in good faith, and whether it actually works, whether . . .
31:00 - 31:40
Jessica Magee: it works in practice. And so yesterday's statement was about new AI guidance. These are not prescriptive requirements, but it's a notice to the world, to folks like us, and to our clients of what the department will be looking at when assessing a compliance program. They also talked about executive comp callback and whistleblower program. As to AI though, I think they provided some very good rational and common sense guidance that exactly as Jerome you've touched on, they want to see that companies sort of know thyself, right? Understand how, whether you're using AI, how you're using AI, who's . . .
31:40 - 32:16
Jessica Magee: using AI, and how it's being used. Did I say how it's being used twice? If I did, we're just gonna go right past that. It's just doubly important. So the human involvement is really important. Human eyes, human ears, human brains on how either AI is being built and deployed proprietary within an entity, whether it's used by its compliance program or how compliance is sort of policing the beat of AI used in the business operation. And so, sort of thinking forward, okay, how does a company take this guidance, which is really focused on risk identification, risk mitigation, . . .
32:16 - 32:52
Jessica Magee: and then who's governing and aware of that risk, right? This sounds like the guidance we've been getting from DOJ, from SEC and others. Thinking ahead to what do I do with that as a company, as someone who counsels companies, it's what you've always done. Be thoughtful, be intentional, have the right people at the table, having the right conversation that have access to resources to effect the necessary change so that when a compliance program is under scrutiny, you can demonstrate, if not perfect, an understanding of the guidance, a movement in the right direction, an investment of resources. . . .
32:52 - 33:18
Jessica Magee: Jeremiah made a great point before that it's also the size and scale of your company, the sector of your company. Are you using AI to sort of filter spam emails? Are you using it to identify who you're hiring, who you're firing, what investment advice you're giving, etc. Companies need to understand that risk. There's no pass for I didn't know, we just let the thing go wild. There will be zero tolerance for that.
33:19 - 33:58
Jerome Tomas: So real quickly, I saw this, and I think everyone saw this a little bit, with the new SEC cyber rules about a year ago, when companies were preparing for their effectiveness and you had legal departments trying to understand how they were going to take mass information maintained by the information security team and distill it in a way that was understandable for purposes of determining whether they had, then or in the future, an 8-K disclosable material cybersecurity incident, as well as the controls and procedures around cybersecurity governance. I think that was the canary in . . .
33:58 - 34:34
Jerome Tomas: the coal mine of the convergence between the traditional legal department from a sort of a macro governance standpoint and those more technical components of the company. I think this rule here is going to be that on steroids. Artificial intelligence and how companies use artificial intelligence, what it means to them, what the positives are, what the negatives are, what's the company's approach to governance regarding the use of new technologies such as AI and its commercial business and compliance program. I don't know that many lawyers that are general or compliance officers that are responsible for the compliance department . . .
34:34 - 35:10
Jerome Tomas: at major companies actually know what that means in plain English and they're going to have to rely upon people that are now chief privacy officers and even more now sort of being funneled into chief AI officers to reduce that into what I call plain English, in a way that can then be explained to the management, to the board, and ultimately to the government. Because you have, this is a system that has existed in a world of hyper technical terms that doesn't make sense to a lot of normal people and they're all of a sudden saying, well . . .
35:10 - 35:19
Jerome Tomas: here you go, if you want credit for an effective compliance program you actually have to do all these things. Well somebody's going to have to actually be able to answer, have we done it?
35:19 - 35:41
Jessica Magee: Right. And whether it's DOJ or SEC, you're going to have someone come along and say, okay, well, who in legal was talking to who in operations was talking to who in privacy? Wait, we're meant to speak to one another? I need to go sit in their office and understand what they do every day and they have to explain that to me in plain English in a way that my Luddite legal brain can understand so that I can then partner with other people to write a policy that my C-suite's going to buy off on because it does . . .
35:41 - 35:55
Jessica Magee: not meet their entrepreneurial appetite. So you can see, I think, where the enforcement's going to go in the next maybe 12, 18 months. Great disclosure controls and procedures cases from breakdowns of people not talking to each other.
35:56 - 36:14
Nicole Wells: So following that, Jeremiah, just in light of these hot-off-the-press compliance guidelines and what you've already been discussing with, you know, your own clients: what have you observed and advised in terms of compliance frameworks and guidelines? What advice have you been giving to your clients?
36:14 - 36:50
Jeremiah Williams: Yeah, a lot of it's common-sense stuff: looking at policies and procedures, thinking about disclosures. We talked about disclosures earlier, like what are you saying about AI, are you doing what you're saying; testing; making sure you have human beings actually looking at these tools to see how they're being used. We talked before about having different people within an organization talking to each other, making sure that communication is happening. I think that's an important part of this as well. All of this also can kind of fall under, you know, just an umbrella of like . . .
36:50 - 37:15
Jeremiah Williams: an AI audit, like having someone come in and just look at your AI process from a compliance standpoint, from a technical standpoint, and be able to make sure that this is looking as it should. It's a challenge because this is very new and a lot of people involved in this don't really understand underlying technology. But just doing those things will at least get you in the right track, I think.
37:15 - 37:16
Jessica Magee: Yeah, that's great. Joanna, anything more to add?
37:16 - 37:52
Joanna Travalini: Yeah, and I'll just add that I think the term AI means something different to everybody; it's so commonly used now. So the only other thing I would add is, when we think about disclosures, think not just in terms of public filings per se, but also disclosures that you're putting out there on a company website, for example, about how you're using AI, recognizing that the audience receiving that message may be interpreting what you mean by AI in a completely different way than you intended. So, you know, it doesn't have to get all the way to the public disclosure point, but for any form of marketing or public statement that you're making, just keep that in the back of your mind.
37:53 - 38:23
Nicole Wells: Great. So turning from proactive to reactive, and touching upon this briefly in our final six minutes: using AI in investigations. Let's talk just briefly about the SEC's use of AI tools, which are not new to regulators; certainly they've long employed certain data analytic tools that many in this room are aware of. But Jeremiah, can you tell us some of the ways the SEC is utilizing AI currently, and what their current approach to this is?
38:24 - 39:01
Jeremiah Williams: So the SEC really just this month has started to formalize its approach to AI. The Biden administration sent a memo to federal agencies with a series of questions about how they're handling AI, and the SEC's response is actually on its website. They have designated a chief artificial intelligence officer, who happens to be David Bottom, who's also the chief information officer for the SEC. There's a working group at the SEC that is organizing the agency's response, and the way this works, at a high level, is that each division is going to have to make . . .
39:01 - 39:43
Jeremiah Williams: a use case. So each enforcement will go and say, this is how we plan to use AI, and that will have to be evaluated, have to be certified annually. And so the SEC is in what seems to be the early stages of a formal process in which it's gonna be internally vetting its use of AI. Now, separate question I guess is, well, that's all well and good, but does that mean they're not doing it already now? And it's unclear, they haven't been public about this, but I think they probably are. I mean the SEC has been using data for more than the decade and big data, so I think this is probably just codifying and formalizing a process and techs that they've been using.
39:44 - 40:09
Nicole Wells: Great, yep. So Jessica, if we kind of turn the tables to our own experience conducting internal investigations: how are you seeing AI, probably more in the traditional sense of AI, with tools and regression analysis and data analytics and the like, being used in internal investigations? And second question: how receptive have the government agencies been to using AI tools?
40:09 - 40:49
Jessica Magee: The government is always receptive to new ideas and new ways of doing things, so I'd say open arms. I think your point about AI meaning different things to different people is a great point. It's not a monolith, right? You really need to talk about what you mean and what programs or methods you're utilizing. But I think many of us have been using technology-assisted review that uses and incorporates artificial intelligence functionality, like Brainspace, in our internal investigations. During Brad Mroski's panel earlier, they were talking about, I think, every internal investigation where they've . . .
40:49 - 41:24
Jessica Magee: got a bullet deadline, they need to get something done and you've got a bazillion documents to review. I think that, you know, I've had better experience, I wouldn't say perfect experience, socializing that to staff where you might have a self-report or it's going, you know, you're in a regulatory investigation, I think it can be incredibly helpful because it saves time and it provides greater confidence than human error through a quality control process. So yes, that's limited to sort of large amounts of word-based data, but I think you can get to positivity, negativity, confidence levels, and I've . . .
41:24 - 41:55
Jessica Magee: had some success getting the staff on board with that, but you really have to show them your work, and that means showing them, which I think a lot of people are doing more in traditional investigations as well, things like search terms and how you made the sausage to get to a point that they should be able to rely on a tool that you used that did incorporate algorithmic learning, data, scraping, et cetera. I would say to Jeremiah's point, you know, the SEC has been doing this for a long time and good for them and they should . . .
41:55 - 42:26
Jessica Magee: be permitted to, of course, but it's also part of the stump speech when they are out talking about the work they're doing. So I think just as they want to know what our clients are doing and how they're doing it with AI, so too should that be fair game in discovery. And so I think the defense bar should be more willing during litigation to be serving preservation notices, discovery requests to understand how the Division of Economic Risk and Analysis, how this economist, how this staff accountant, for instance, used a tool that utilizes AI, how they kept . . .
42:26 - 42:46
Jessica Magee: their records because that forms part of the investigative file. I can't imagine that's a conversation people are gonna love having, but I think it's a conversation we as defense counsel need to start having to be credible litigation threats as they're using those tools to sometimes now judge our clients for using those tools.
42:46 - 43:15
Jerome Tomas: Yeah, real quickly, we've used AI tools, or have seen AI tools be used, in analyzing disclosures and in analyzing information for purposes of making notifications in data breaches. And what I can say is that AI works much better, AI tools are much better, when there's a mathematical similarity across all of it. I'm like, this is the no-duh statement of the day, right? AI works better when the data is consistent.
43:16 - 43:17
Jessica Magee: And true, yeah.
Jerome Tomas: AI works less well when the information that you're looking for is in even slightly disparate places throughout a document or throughout a source of data. So even if you're using these tools, I've seen firsthand that the results tend to be spotty the more inconsistent the data is. And that's somewhat counterintuitive to what we're all hearing about AI, right? It's supposed to be this amazing thing that's gonna replace humans. And right now it's nowhere near anything more than what I would say is probably a first-year associate.
43:58 - 44:04
Nicole Wells: And to wrap it up: Joanna, what do you see the biggest risk being in the use of AI in conducting investigations?
44:04 - 44:29
Joanna Travalini: I mean, I think we're just in the infancy, right? When I think about it now, I know that there are a lot of law firms that maybe have already rolled things out, and certainly a lot that are trying to internally develop best practices on how to use it going forward. From an outside counsel perspective, and for in-house counsel too, we obviously have ethical obligations that we need to make sure we comply with. So I think that as we think about the use of AI, I don't think it's going to trump the human . . .
44:29 - 44:52
Joanna Travalini: brain. I don't think it's going to override the need for a human to be conducting a lot of the work that we do. But we are gonna find ways, I think, for it to enhance and accelerate the work that we do. I think some of the risks are gonna become walking the fine line between taking advantage of this enhanced technology and still making sure we comply with our ethical obligations.
44:52 - 45:02
Nicole Wells: That's a perfect summary. We probably could have had another 45 minutes, but with that we will conclude. Thank you all so much; that was fantastic. Thank you, everyone.