LLM prompting is part art, part science.
You can get pretty far with a plug-and-play model. Just grab a proven prompt, drop it into ChatGPT, Claude, or your LLM of choice, and get a clean, usable answer. It’s a great option for rewriting emails or conducting research.
But finance teams don’t just send emails all day. They drill into variances and put together board decks. They craft stories based on trends they’re seeing in the data. That kind of work requires context—something that you can’t capture in a generic prompt.
So how do you give LLMs the context they need? And how do you fit the outputs into real workflows without blowing up your processes or putting data at risk?
These are some of the questions we unpacked in our recent webinar, The Art of AI Prompting in Finance, featuring Stephen Hedlund (Rillet) and Craig Thompson (Aleph).
{callout}
TL;DR
- AI didn’t gain traction in finance until LLMs improved and leaders saw other functions realizing value from them.
- Don’t fixate on “the best model.” Focus on how to incorporate LLMs into your workflows. The models tend to leapfrog each other.
- The biggest unlocks come from pairing well-crafted prompts with business-specific context.
- ROI isn’t just time saved. It’s leverage that compounds across decisions, workflows, and teams.
{/callout}
Why did it take so long to bring AI into finance?
It’s not because finance pros are less tech-savvy than marketers or customer service teams, even though those functions got a head start on AI adoption.
There were very real blockers, from the sensitivity of financial data to models that simply weren't reliable enough for the work.
But as models improved—and finance leaders saw their peers in other departments getting real leverage from AI tools—the urgency ramped up quickly. They didn’t want to miss the party.
January of last year is when AI in finance really kicked off. A lot of finance leaders looked at their 2024 budgets and went, whoa. We're spending on things like Cursor and Windsurf for our developers. We're using AI for our customer service teams. We're using AI for our legal teams with tools like Harvey. And they asked: where's mine? That's what has really accelerated this trend.
The question now isn’t whether to use AI. It’s how to make it useful without disrupting your workflows or putting sensitive data at risk.
How LLMs fit into finance workflows
Your coworker tells you about how ChatGPT saved him hours of competitive research. Another chews your ear off about how Gemini writes all her emails. Meanwhile, your X feed is filled with people claiming Claude Code is their new full-time assistant.
So…which model reigns supreme?
It's a fun question to debate, but not a useful one to agonize over. Even if one model is technically better today, it will likely be leapfrogged in the near future.
All of the leading models (ChatGPT, Gemini, Claude) are extremely powerful. Your time is better spent thinking about how you can leverage them in your workflows rather than choosing the “right” one.
During the webinar, Craig demoed an all-purpose use case that’s applicable to finance and beyond:
This one I use all the time. I drop my daughter off at school in the mornings, and I have a fifteen-minute car ride back to my house where I'm trying to figure out what thoughts I want to organize. The Idea Organizer is really great for that, and it's available across basically all of your favorite LLMs. ChatGPT's interactive mode is pretty good. I'll just turn it on while I'm driving and talk through the things I want to organize. ChatGPT will listen and have a full conversation, and at the end it'll write out a more distilled version in text, whether that's a way I want to communicate a particular concept, a Slack message, or an email. It's really good at distilling free-form thinking into much more actionable, executive-level commentary.
This is just one of 15 finance-vetted prompts included in our finance AI prompt guide.
Something like an Idea Organizer is just the tip of the iceberg, though. Pairing well-crafted prompts with the right context is how you unlock LLMs’ real superpowers.
That’s the idea behind the Model Context Protocol (MCP): a lightweight but powerful standard for connecting models to your tools and data, so they come primed with background on your role, workflows, and preferences instead of starting from scratch every time.
In Rillet’s case, MCP works by automatically passing relevant context from the ERP—like your environment, team structure, or systems setup—directly into the model. So instead of typing out explanations like “we use a custom chart of accounts” or “this workflow runs through Bill.com,” the model already knows.
With that context in place, the model transforms from generic chatbot into an assistant that knows the ins and outs of your business, and can guide you through problems in real time.
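Mechanically, "the model already knows" just means relevant context gets injected into the prompt before your question ever reaches the model. Here's a minimal sketch of that idea using hypothetical field names (not Rillet's actual implementation):

```python
# A minimal sketch of context priming. The business facts the ERP
# already knows get prepended to every request, so the user never has
# to re-explain them. Field names here are illustrative only.

def build_primed_prompt(question: str, erp_context: dict) -> str:
    """Prepend known business context to a user question."""
    context_lines = [f"- {key}: {value}" for key, value in erp_context.items()]
    return (
        "You are a finance assistant. Known facts about this business:\n"
        + "\n".join(context_lines)
        + f"\n\nQuestion: {question}"
    )

# Example: the model arrives already "knowing" the systems setup.
context = {
    "chart_of_accounts": "custom, 6-digit codes",
    "ap_workflow": "runs through Bill.com",
    "close_cadence": "monthly, 5 business days",
}
print(build_primed_prompt("Why is AP accrual up this month?", context))
```

In a real integration the priming happens behind the scenes via MCP rather than string concatenation, but the effect is the same: every answer starts from your business, not a blank slate.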
Stephen shared a great example of this during the webinar:
What's great about this is that as I'm trying to figure out how to connect my Rillet MCP to Claude, I can just ask Claude directly. I don't need to go to a separate instance or open ChatGPT in another window. I do it all right here, and it literally helps me troubleshoot. It tells me, hey, open this up on your Mac. I don't know how to do that. I'm not a developer. So I ask: how do I find this? Say I have Claude Desktop, how do I do step 2.1? It says: open Finder, press Command+Shift+G. It's teaching me step by step how to do these things. That's what I love about how these LLMs are set up.
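For readers following along at home: the step-by-step setup Stephen describes ends in Claude Desktop's MCP config file (`claude_desktop_config.json`, which on a Mac lives under `~/Library/Application Support/Claude/`). The overall shape looks like this; the server name and package below are placeholders, not Rillet's actual ones:

```json
{
  "mcpServers": {
    "rillet": {
      "command": "npx",
      "args": ["-y", "example-rillet-mcp-server"]
    }
  }
}
```

The `mcpServers` key with `command`/`args` entries is Claude Desktop's documented format; your vendor's docs will give you the exact command to paste in.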
Craig summed up the importance of context well:
There are two principles I'd take away from this. One, the context you give it matters a lot. The more context, the better. And two, it still needs work. You should never expect the output to come out perfect, but that doesn't mean you stop using it. There's still a use case here even if the output isn't perfect from the get-go.
Can I trust LLMs like ChatGPT and Claude with my data?
Finance teams dipping their toes into AI almost always start with some version of the same question:
“If I’m going to give these models valuable context about my business, how can I be sure that my data is going to be kept safe and used responsibly?”
It’s a valid concern that Stephen and Craig both spoke to directly.
How safe is it to share company data on these tools? There are a couple of ways to think about it. One, all of these tools have enterprise agreements where, at least on paper, they say they don't use your data to train their models. The other approach I've seen is hosting an LLM locally. In that case you need an engineer to help you stand it up, so it's not always feasible at smaller companies. But if you're very concerned about privacy, that's the best way to do it. It's also worth remembering that these models probably have access to a lot of this information already, so even if you're uploading things, they may know them anyway. Honestly, I think the enterprise agreement is probably the biggest legal protection you have: if you're loading company data, make sure you're doing it not as a personal user, but in a company workspace.
But data privacy is just one piece of good data governance. You also need to trust that the model’s answers are accurate. That’s an inherent limitation of generic LLMs that tools like Rillet and Aleph solve for.
This is where the validation piece becomes really important. A vendor like Rillet or Aleph will provide backlinks to everything, so you can go through and validate that information yourself very easily. There's a deeper level of trust there. And with models changing and updating all the time, I do think it's important that we as individuals can do these things ourselves. But there's also a benefit to going through a vendor: Rillet has an incentive to keep the best model up to date at all times, with people dedicated to figuring out which model is best and how to use it. That then becomes something you can just access through Rillet. So there is a benefit to going through vendors, and exactly to Craig's point, it's going to keep changing, and it's going to be on all of us to figure out where to go.
How finance teams are reframing AI ROI
Time savings are the most straightforward ROI finance teams see from AI. But “hours saved” is a crude metric that misses much of the value these tools can unlock over time.
The more you use LLMs, the faster you get at using them, and the more use cases you uncover. Eventually, time savings are dwarfed by the complete reimagining of how your team works.
That shift has massive implications—not just for team efficiency and output, but for individual career growth, too.
You know, Stephen, as you were talking about how you define ROI for a project, and even how I was talking about how many hours it takes versus how many hours you get back, one framework I have for approaching this is that there's layer-one ROI: you spend time doing stuff today, so how much time can you save if you devote time to building something with AI that automates some of it? But there's also a level two: if you spend the time to get good at using AI to automate one thing, it will take you less time to automate the second thing and the third thing. So if you're really struggling with ROI on getting started, the more you use it, the better you'll get at it, and the more valid use cases you'll find. The third-order component, which is very exciting and a little bit scary for us as finance folks, is what AI sophistication means for career trajectory and longevity. Increasingly, the expectations on what each of us can do individually are growing, for better or for worse. So there's no time like the present to be someone who's thinking about maximizing leverage.
Whether you’re interacting with ChatGPT for the first time or vibe coding your own apps, these resources can help you get more value out of AI today: