
Season 4 Episode 5
Lauren Jenkins: Welcome to Prosperity at Work from JP Morgan Workplace Solutions, the podcast all about equity compensation, financial wellbeing, and more. I’m Lauren Jenkins, head of Executive Participant Servicing at Workplace Solutions.
Chris Dohrmann: And I’m Chris Dohrmann, director of Strategic Partnerships at Workplace Solutions. Today we’re gonna talk about AI, but isn’t everyone?
Lauren Jenkins: Hey, it’s a fact of life, Chris. It’s used everywhere from movies to health tech to transportation logistics, and it’s even driving cabs in some cities. AI is here to stay. But today we’re gonna talk about the impact AI is likely going to have on equity compensation, benefits, HR, and all that good stuff in the coming year.
Chris Dohrmann: And I think we’ve got the perfect guest here with us today for a deep dive. We’re joined by Charese Smiley, executive director of AI research, Natural Language Processing, at JP Morgan. Charese, welcome to the show.
Charese Smiley: Thank you so much.
Lauren Jenkins: Just like me, many of our listeners won’t be experts on AI. So can we start with you telling us what you do for a living? Maybe just the CliffsNotes version.
Charese Smiley: As you mentioned, I work in natural language processing, or NLP, and it’s a subset of artificial intelligence. The natural language part is just about how we handle, or work with, human languages, and then the processing has to do with computers and getting computers to do things with natural language. So, what I do day to day is try to get computers to understand text and even generate text, which is what I think a lot of people are being exposed to now, with the rise of LLMs.
Charese Smiley: Within the context of the bank, we try to see whether or not we could apply NLP techniques to financial documents, so we go around and we meet with different business teams within the organization who might be wanting to use AI to automate some part of their workflow or some part of the process that they have. We try to see if it’s something that we could work on together with them, and we collect their data, we analyze it, we build models to scale for them, and then we continue to iterate with those business partners until we find a solution that they’re happy with. So that’s one part of my job.
Charese Smiley: And then the other part of my job is being a public facing researcher, and that part is really exciting. So I and the rest of my team, we get to work on problems within financial NLP. We often write papers and submit them to conferences and journals. And the types of venues that we would target would be the same ones that anybody at an academic institution would also submit their work to. We give talks, we organize workshops and tutorials, and we try to fully participate within the research community as a whole, and try to keep JP Morgan the best bank for AI adoption, as it is right now.
Chris Dohrmann: One thing that we do in IT and finance is we use a lot of acronyms, so I just wanna make sure that the audience knows: an LLM is a large language model. And one of our big predictions for 2025 is that AI will have a huge impact on the workforce, on compensation, and on equity compensation in particular. One area that comes to mind when we’re talking about that is data security. What are the risks that people should be aware of with LLMs?
Charese Smiley: I think one thing people forget, and I tend to forget it often myself, is that the data has to go somewhere. And one of the things I like to bring up is when you send a text message or an email message, you might be well aware that okay, there’s a copy of the message on my phone or on my system, and if I’m sending a text message to you, you have a copy of it. But we kind of forget that the company that we’re using to send these messages back and forth, they also have a copy of that data.
Charese Smiley: So when we interact with an LLM, a large language model, a chatbot, we also have to be aware that there’s a company on the other side who is receiving the data that we’re sending in and sending us back responses. And so we need to be very careful when we’re uploading proprietary company documents into a chatbot or into an LLM because it is going out to a company somewhere, whichever one that we’re working with. And then we become subject to how they maintain their data.
Charese Smiley: So if they get a data breach, it could leak our proprietary company data. Or if they take that data and use it to train the next version of the model they build, then it could leak to competitors as well, when they ask questions about what our company is doing. That’s not to say that you shouldn’t use LLMs at all; you should just use them within whatever your company’s guidelines are. So if you have one in-house that you know is specifically built and tailored for your company and that your company allows you to use, then you can upload the data there and interact with it there, but never take it outside of the company’s data system.
Charese Smiley: And then if you… I imagine many HR professionals have to work with very, very sensitive data, you might even be aware that within your company walls, you may not want to upload very sensitive data, because if it’s used to retrain the model even in-house, then people within the company who may not need to have access to that data could get access. So just make sure that you understand very well what the guidelines are around the data that you have within your company.
Lauren Jenkins: So let’s give our listeners a little bit of insight on the basics using LLMs as an assistant. What capabilities does AI have as far as reading or analyzing documents? That seems like something our audience could benefit from if they aren’t already using it.
Charese Smiley: I suppose that there are like, actually two different modes you could use. One would just be to ask questions directly or interact with the model directly, and then you would just get information back based on the data that it was trained on, so this could be anything from the internet. But I think where it’s really been shining for me is, having that ability to upload and search through very long documents for information and be able to ask those different questions about it, or you could upload multiple documents and be able to search across those documents.
Charese Smiley: You could ask for it to tell you the difference between two documents, so I have an older version and a newer version, could you tell me what’s changed between these two documents, which in the past would’ve been something, very tedious. I think on the other side, I really like it for brainstorming and trying to think about different topics, at least for me, when it comes to writing, staring at that blank screen and, trying to think about what it is I wanna write, that’s usually the most daunting part.
Charese Smiley: So you can ask it to help you brainstorm on a topic, give you an outline to start with that can help kickstart your writing on anything that you’re working on. You can ask it to give you arguments from a different perspective than the ones that you’re currently writing about. And I like to ask it to critique my own writing or to review things that I’ve written and say what would you change about this? Could you tell me if there’s anything missing? Have I overlooked anything? Or where can I make thing a bit more clearer? And I’ve found that from an assistant perspective it really shines in that way.
Chris Dohrmann: You brought up the fact that it can review large documents for you, and many of our listeners will probably be aware that HR departments are already using it to analyze resumes. But that brings up another area of concern, one of the risks involved. Tell us about bias and how bias can come into this.
Charese Smiley: Yeah. Bias is a huge topic when we think about AI, and especially with the rise of LLMs and people using LLMs on a day-to-day basis. I think one thing to consider is the way that these models have been trained. A lot of them have used vast amounts of data from the internet as part of their training, and while it’s made them very, very powerful tools, we have to understand that this data is historic in nature. All data is historical in nature, right? So it becomes a snapshot of a point in time in history, or a point in time in society, with all the good, the bad, and the ugly that data from the internet has within it. But thinking about that snapshot, it may not be exactly what we wanted, when we’re thinking about it from a hiring perspective, for example.
Charese Smiley: So while we may want it to just focus on the characteristics of an individual job seeker, such as what their skills are, what their training is, what their experience is, it may be learning broader patterns in society. For example, if the person is applying for a civil engineering position, maybe that role has historically been 80% male, 20% female, just throwing some numbers out there, and it may seek to replicate that in the future with the data that you’re putting in there. If you’re asking it to score or to screen a resume, it may then inadvertently score a resume with a male-sounding name higher than one with a female-sounding name, just because it’s learned that there’s some sort of pattern in society where males have been hired more often or have historically held a particular position more often.
Charese Smiley: And then it just kind of trickles down. Say you decide, okay, I’m gonna take all the names off of the resumes before I upload them. It may still be able to learn things from the person’s address, because address information, especially in the United States, is heavily attached to different demographics, so it may learn demographic data just based off of where you live. Say I take that off, it may learn things about me from the clubs and the associations that I have listed on my resume, such as, oh, was I a member of the Society of Women Engineers? Did I go to the Grace Hopper Conference for Women in Computing? Was I in a certain sorority or fraternity?
Charese Smiley: Was I an Eagle Scout? Which until recently meant I was a male and it may still score me on those things. And then even if I could take all of those things out, it may learn something just about the patterns and the ways that different people talk that may be associated with their own demographic background, and it may apply that to the way that it rates the resumes. So I’m not against actually using these as a tool to help with screening, but I think that we need to test systems very carefully to make sure that we’re not injecting unwanted bias into our hiring process.
Lauren Jenkins: Another big trend this year is likely to be pay transparency with state level laws in places like New York becoming more of the norm. How should companies be thinking about leveraging AI to deal with these potentially vast amounts of data?
Charese Smiley: So as we mentioned, AI is a very powerful tool, and we’re able to input all kinds of data into it. So we talked about things like experience and job skills, location and salaries. And you know, while we talked a little bit about bias, it can also help us to identify disparities and it can help us to benchmark against market trends, which will help maintain a competitive advantage. So I think ultimately AI can empower companies to make more educated, informed decisions than they would make without it.
Chris Dohrmann: I’ve heard you speak before about the creative aspect of working with an LLM, and that’s near and dear to my heart, as I do have a technical degree but I have a liberal arts major. So, specifically, how to write prompts for an LLM: can you give us a quick guide so that our listeners can get some kind of insight into that aspect of it?
Charese Smiley: Sure. I think the more detailed we can be when we write a prompt, the better the outcome you’ll get in the response. So one technique that is often used is to create a role or a persona that you have the LLM respond to. So you could say something like, imagine you are an HR manager at a Fortune 500 company, or something like that, to kind of kickstart the prompt, and then tell it exactly what you want it to do. You could be really specific about the type of output that you’re looking for. You could say something like, be concise or be detailed, or I wanna have three paragraphs about this topic, or could you give me the top 10 bullet points.
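As a concrete illustration of that persona-plus-constraints pattern, here is a minimal sketch. It assumes the openai Python client; the persona, topic, and model name are made up for the example.

```python
# Minimal sketch of a persona-style prompt with explicit output constraints.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        # The persona goes in the system message to frame every reply.
        {"role": "system",
         "content": "Imagine you are an HR manager at a Fortune 500 company."},
        # The user message spells out exactly what output shape is wanted.
        {"role": "user",
         "content": ("Give me the top 5 things to check before rolling out a "
                     "new equity compensation plan. Be concise: one short "
                     "bullet point per item.")},
    ],
)
print(response.choices[0].message.content)
```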
Charese Smiley: Another way that we often interact with these is to provide examples. So you could provide different examples of the inputs and outputs that you would like to see. A high-level example might be a translation exercise where I give it English sentences and the French outputs I want for the translation, and I give it a certain style that I’m looking for. But basically I’m saying, “Okay, here are three different examples of the type of output that I would like you to give me.”
Charese Smiley: And it does a pretty good job of following along with that. But that could translate to something like “okay, here are some emails that I’ve written in the past. Could you write something in the same style?” “Here’s a document that I’ve written in the past. Could you write something in the same style?” And it does a good job of mimicking the type of examples that you give to it.
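Here is what that few-shot pattern can look like in practice, mirroring the translation example above. This is an illustrative sketch assuming the openai Python client; the example pairs and model name are made up.

```python
# Minimal few-shot sketch: show the model a couple of input/output pairs
# before asking for a new one in the same style.
from openai import OpenAI

client = OpenAI()

messages = [
    # Worked examples of the behavior we want.
    {"role": "user", "content": "Translate to French: Good morning, team."},
    {"role": "assistant", "content": "Bonjour à toute l'équipe."},
    {"role": "user", "content": "Translate to French: The meeting is moved to Friday."},
    {"role": "assistant", "content": "La réunion est déplacée à vendredi."},
    # The new request; the model follows the pattern set above.
    {"role": "user", "content": "Translate to French: Please review the attached report."},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=messages,
)
print(response.choices[0].message.content)
```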
Charese Smiley: And then if you go in a different direction and you wanna work on more of a reasoning task, you could actually ask it to think step by step. So you could say, provide me your logic for reaching the solution step by step, and it’ll show you how it’s working through a particular problem and give you all of those details, one by one, and that can be really interesting to see.
Charese Smiley: It can also help you to pinpoint if you feel like the output from the LLM is going wrong and you don’t know why it’s not giving you the right answer, you could ask it to provide it again, step by step, and you might be able to see where the logic is going wrong and understand a little bit better about the response that you got, or we’ve seen examples where it actually corrects itself after you ask it to go step by step, then it’ll say like, oh, but actually this and then it’ll give you the correct answer at the end. So that’s another way that you could prompt your LLM to get a better response.
Lauren Jenkins: Very helpful tips. Charese, thank you so much for joining us today.
Charese Smiley: Yeah, thank you very much for having me.
[music]
Lauren Jenkins: Next up, we’re excited to welcome Travis Dingledine, executive director and product director at Workplace Solutions. Travis, welcome to the show.
Travis Dingledine: Thank you for having me. Glad to be here.
Chris Dohrmann: So Travis, in what we call our quick fire round, we’d love to hear more about how we are thinking about AI here at Workplace Solutions. Let’s start with an easy one. Can you tell us about your role and how it relates to this episode’s AI topic?
Travis Dingledine: Yeah, so I oversee a number of product areas for Workplace Solutions, and my teams are using AI every single day to shape the products that we’re building for our customers. That includes incorporating design thinking principles and using recordings and anecdotes of customer feedback to translate that into jobs to be done, so that we ensure that we’re solving real-life problems with the tools that we’re creating for our customers. And even our engineers are using artificial intelligence to help draft code to really increase our delivery pace, so that we can go to market with commercial features at a much, much faster pace and deliver exciting experiences for our customers. So we’re really integrating it in our day-to-day here in Workplace Solutions.
Lauren Jenkins: At a high level, how much focus would you say is being put on the use of AI in Workplace Solutions here at JP Morgan?
Travis Dingledine: Well, JP Morgan’s done something pretty incredible, and has democratized the use of LLMs as an assistant for every employee. So every single employee of Workplace Solutions has their own fully compliant, fully private version of an AI assistant that you can have on your desktop all day, every day to help you improve communication, to tackle a lot of that no joy type of work that we all kind of wish would be off of our plate, improving processes, et cetera. And it’s just sort of like a new way to work to empower yourself and sort of supercharge your own efficiency throughout the day and frankly, efficacy in many cases.
Travis Dingledine: And then there’s obviously business specific use cases as well, so we’re on the precipice of really incorporating AI in some of the ways that we present our business to our customers. So it just feels like we’re at this sort of transformative inflection point, which is really exciting to be a part of and to shepherd.
Chris Dohrmann: So, Travis, what do you think are the biggest AI related challenges that you expect this year?
Travis Dingledine: Well, last year was a pretty interesting year. And I have to say, I’ve been at JP Morgan for quite a long time, and I’m very impressed and proud of the way that we’ve leaned into AI as the future of our business. The work that’s been done so quickly to create toolkits internally so that we can actually use AI, again, compliantly, rather than being really scared of it and sort of tiptoeing around it, means we’ve done so much of that hard work already.
Travis Dingledine: So actually the hardest part right now is just adoption, which isn’t the most exciting answer, perhaps, but we’re at this point where using AI all day, every day is not quite a muscle for every person yet. There’s just a little bit of a period of time here where we need to get over a hump, where this becomes a reflex and you’re actually augmenting yourself in your day-to-day, versus not quite being sure how to use it just yet.
Travis Dingledine: And then I think the other thing, that’s really cool is that 2025 is the year that we start to roll out productionized use cases so that we’re actually incorporating AI in the way that we do business rather than kind of being in this place where we have all these really cool POCs, but it hasn’t quite transformed the business yet. So adopting those new tools and frankly just changing the way that you work is what we’re trying to tackle, going into 2025.
Lauren Jenkins: Focusing on one specific example, can you talk about the success we’ve found using a large language model to better equip our service teams to support participant inquiries?
Travis Dingledine: Yeah, so this is one of those business specific use cases where we’ve built a workplace copilot for our service teams and one of the things that we’re really well known for is the human touch of service, as you know, Lauren. But you can imagine with all the different customers that we have, it’s so hard to know every single nuance and detail, ’cause it is literally different for every single client. And then to apply it to an individual employee who’s live on the phone, with questions about their situation within the context of their specific company. So we’ve leaned into sort of the bread and butter of LLMs, which is synthesis.
Travis Dingledine: And so now, when an employee calls in with a question, our service agents are going to have at their fingertips access to all of the information related to that employee and that company, served on a platter for them. That way they can answer service requests quickly and very accurately. And again, that’s gonna continue to be something that’s really important for our business: we have all these great digital platforms, but that human element in terms of service and customer relationships is really, really important, and we’re only gonna get better at it from here.
Lauren Jenkins: Definitely. I can certainly attest that this is an efficiency game changer.
Chris Dohrmann: Last question, Travis. Can you give us a sneak peek to any exciting concepts you are currently researching?
Travis Dingledine: Sure. There’s a lot of really interesting things happening behind the scenes in terms of proofs of concept and really cool ideas that we’re working on. One that I think is really interesting is that we offer services to executives, and executives have a really kind of difficult problem in terms of the equity that they’re compensated with and their ability to sell for other needs and goals in their life. Oftentimes that results in them having to create a 12-month sales plan, looking out into the future, which is often kind of transactional in nature. But what’s really happening is that they’re selling their stock to solve a goal they have in real life with their family’s financial picture, whether that’s long-term diversification, a short-term cash need, et cetera.
Travis Dingledine: And so what we’re using natural language processing for is to take the client’s intent, the real goal that they’re trying to achieve with their balance sheet in real life, combine it with the sort of transactional nature of the holdings that they have, and ultimately create a trading schedule. And so this has been really fun to work on, because not only does it help us scale this sort of subject matter expertise, or this service, across a wide range of advisors who may or may not be super familiar with how to do this, but it ensures that we’re leading with the human element. These are not hedge funds, these are people, and they have real-life financial goals that we’re trying to solve for.
Travis Dingledine: And so in addition to the scale, it just makes sure that we’re always putting the client’s goals first, so that way ultimately when we do transact, the outcome’s fully aligned with what they’re trying to achieve. And so that’s kind of neat because it’s bridging this gap between the efficiency gains that we’re seeing now in LLMs with synthesis, processes, et cetera, but bringing that human element in the way that we give advice can ultimately be a first baby step in being transformative in the way that we provide wealth management services to our customers.
Travis Dingledine: All of these are really fun to work on, and you hear all of these concerns about AI sort of limiting human creativity or taking the place of human creativity, but what I’m actually seeing here is creativity blossoming. Ideas can come from anyone and anywhere, and any person can even incubate them on their own with these tools. So I think this is a really exciting time for us as a business, and also when you just think about your own fulfillment as an employee and building a business, you’re just so much more empowered these days with these AI tools.
Lauren Jenkins: Totally agree. Travis, we really appreciate you joining us and sharing your insights today.
Travis Dingledine: Thank you very much.
[music]
Chris Dohrmann: And that’s it for this episode of Prosperity at Work from JP Morgan Workplace Solutions.
Lauren Jenkins: And as always, if you’ve made it this far, thanks for listening. If you enjoyed this episode, we hope you’ll review, rate, and subscribe to JP Morgan Workplace Solutions’ Prosperity at Work, wherever you get your podcasts.
Chris Dohrmann: You can find more insights on equity compensation, financial wellness, and more by following us on LinkedIn or over at globalshares.com, where you can also download our new global equity compensation survey report.
Lauren Jenkins: Until next time, that’s Prosperity at Work. Bye.
Chris Dohrmann: Bye.
Lauren Jenkins: Information provided in this podcast is intended for informational and educational purposes only, and may contain views which differ from the views of JP Morgan Chase & Company. For specific guidance on how this information should be applied to your situation, you should consult a qualified professional. For full details, see the show notes on your podcast player right now.
Chris Dohrmann: The Prosperity at Work podcast is produced by DustPod.io with JP Morgan Workplace Solutions.
What impact will A.I. have on employee equity compensation plan management in 2025? At the risk of spoilers, big changes are coming down the line.
Guests Charese Smiley, Executive Director, Artificial Intelligence Research, and Travis Dingledine, Executive Director in Product, Workplace Solutions, join Chris and Lauren on our first episode of the new year to fill us in on how A.I. might change the game this year, including:
- Ways to use A.I. as an admin support
- Large language models (LLMs) supporting customer service
- A.I. as a helper to execs managing their comp
- Data security and A.I.
Information provided in this podcast is intended for informational and educational purposes only. Guests on the Prosperity at Work podcast may not be affiliated with JP Morgan Chase & Co. The podcast contains the views of a JP Morgan employee, which may differ from the views of JP Morgan Chase & Co., its affiliates and employees. The views and strategies described may not be appropriate for everyone. Certain information was obtained from sources we believe are reliable, but we cannot verify the accuracy of the content and we accept no responsibility for any direct or consequential losses arising from its use. You should carefully consider your needs and objectives before making any decisions. For specific guidance on how this information should be applied to your situation, you should consult a qualified professional.
This publication contains general information only and J.P. Morgan Workplace Solutions is not, through this article, issuing any advice, be it legal, financial, tax-related, business-related, professional or other. J.P. Morgan Workplace Solutions’ Insights is not a substitute for professional advice and should not be used as such. J.P. Morgan Workplace Solutions does not assume any liability for reliance on the information provided herein.
Hosts

Chris Dohrmann
Strategic Partnerships,
J.P. Morgan Workplace Solutions

Lauren Jenkins
Head of Executive Participant Servicing,
J.P. Morgan Workplace Solutions