How Is AI a Money Maker?
Soon on Wealth Systems we are launching a special series focusing on applying AI to wealth building and preservation. We’ve been getting lots of requests for this series. It’s going to be epic (and tactical).
It’s going to be exclusively for paid members.
Also, friendly reminder: We are raising the price on August 29, 2025. Going from $12/month to $17/month.
Then, we are increasing price AGAIN to $22/month on December 31, 2025.
Subscribe now to lock in today’s rate! You have 11 days to secure the current price.
By the end of this year, the price will have gone from $12/month to $22/month. Lock in $12/month right now:
As the founder of RevSystems, I spend my days helping businesses navigate the exhilarating and confusing world of AI.
We work with clients to deploy intelligent systems across their entire operation, from customer-facing chatbots to back-of-the-house revenue operations engines. In these conversations, some questions are good, some are essential, and some are pure gold.
Recently, during a strategy session with a particularly sharp CTO, I was asked one of those golden questions. We were mapping out their AI roadmap, and after a deep dive into Retrieval-Augmented Generation (called RAG by everybody), she leaned back and asked, "This is powerful. It solves our data freshness problem. So, why would we ever need to fine-tune a model at all? Why not just get really, really good at RAG?"
I love this question.
It’s not a sign of misunderstanding; it’s a sign of engagement. She’s looking for real value in the technology, not a cool story to tell her tech friends.
It cuts through the hype and gets to the heart of a critical strategic decision that every company adopting AI will eventually face. The "RAG vs. Fine-tuning" debate is one of the most prevalent conversations in the industry today, often framed as an either/or choice.
Done right, this isn't a rivalry. It’s a partnership. And the answer isn't about choosing a winner; it's about understanding the strategic journey of AI maturity. The ultimate goal isn't just to make a model work, but to make it work efficiently, sustainably, and in a way that is deeply embedded in the DNA of your business. Thinking you can achieve that with RAG alone is like thinking you can win a Formula 1 race with an off-the-shelf engine, no matter how good your fuel is.
To truly understand why, we need to move beyond the surface-level definitions and think about these techniques not as tools, but as distinct forms of education for your AI.
Educating Your AI: The Library vs. The University
At RevSystems, we often use an analogy to clarify the roles of RAG and fine-tuning.
P.s. if you are an AI nerd like me, you NEED to subscribe to Life in the Singularity. We talk AI exclusively over there. It’s all tech, all the time.
Back to our analogy.
Imagine your LLM (a GPT, Claude, or Llama) is a brilliant, highly educated, but inexperienced new hire. They have an incredible general knowledge base but know nothing specific about your company.
RAG is like giving this new hire access to your company’s entire library and a live internet connection.
When you ask them a question ("What were our Q2 sales figures for the Alpha Project?"), they don’t answer from memory.
They run to the library (your vector database), find the precise Q2 sales report (the relevant document), read the key passages, and then use their intelligence to synthesize an answer based only on that freshly retrieved information.
This process is transformative for several reasons:
Factual Grounding: It dramatically reduces the risk of "hallucinations" or the model making things up. The answer is tethered to a specific, verifiable source document.
Data Freshness: Your library can be updated every second. If a new sales number is recorded, the RAG system can access it immediately. The model’s core knowledge doesn’t need to change for it to use new information.
Transparency: You can ask the AI how it knows something, and it can point you to the exact document it referenced. This is crucial for compliance, validation, and building trust in the system.
RAG is about providing context. It answers the question, "What do you need to know right now to answer this query?" It’s an open-book exam.
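The library loop above can be sketched in a few lines. Everything here is a toy stand-in: the two documents are invented, the bag-of-words `embed()` substitutes for a real embedding model, and the final prompt string is where a real system would call an LLM.

```python
# Toy RAG loop: embed the query, rank the "library" by similarity,
# and build a prompt that forces the model to answer from the
# retrieved text only. All data and helpers are illustrative stand-ins.
from collections import Counter
import math

LIBRARY = {
    "q2_sales.txt": "Alpha Project Q2 sales were $1.2M, up 15% from Q1.",
    "returns.txt": "Returns are accepted within 30 days with a receipt.",
}

def embed(text):
    # Stand-in for a real embedding model: a word-count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, k=1):
    # The "run to the library" step: rank documents against the query.
    q = embed(query)
    ranked = sorted(LIBRARY.values(), key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def answer(query):
    context = "\n".join(retrieve(query))
    # In production this prompt goes to the LLM, which is instructed
    # to synthesize an answer ONLY from the retrieved context.
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(answer("What were our Q2 sales figures for the Alpha Project?"))
```

The key design point is that the model's weights never change; only the context handed to it does.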
Fine-tuning, on the other hand, is like sending that new hire to a specialized, intensive university program designed by your company.
This isn’t about giving them a document to read for a single task. This is about fundamentally altering their neural pathways, their very way of thinking, to align with your company's methods and voice. You aren't just giving them facts; you're teaching them skills.
This "university program" involves a curated curriculum (a dataset of high-quality examples) that teaches them:
Style and Tone: How to communicate with the specific voice of your brand—be it formal and professional, witty and empathetic, or deeply technical.
Implicit Knowledge: The unwritten rules of your business. How to classify a support ticket with the nuance that only a veteran employee would understand. The specific JSON format your downstream systems require, without needing to be reminded in every single prompt.
Domain-Specific Reasoning: How a lawyer at your firm analyzes a contract, not just what the contract says. How a marketer at your company develops campaign slogans, not just listing product features.
Fine-tuning is about instilling competence. It answers the question, "How should you behave and think to perform this task effectively?" It’s about internalizing knowledge so deeply it becomes instinct. The exam is now closed-book, because the knowledge is part of who they are.
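What does that "curated curriculum" actually look like on disk? Usually just a file of high-quality examples. This sketch writes a couple in the chat-style JSONL format that many fine-tuning APIs accept; the brand voice and example conversations are invented for illustration.

```python
# Sketch of a fine-tuning "curriculum": a JSONL file of example
# conversations demonstrating the exact tone and behavior we want
# the model to internalize. All content here is hypothetical.
import json

SYSTEM = "You are Acme Support: warm, concise, never more than three sentences."

examples = [
    {"user": "My blender won't start.",
     "assistant": "Sorry about that! First, check that the jar is locked onto "
                  "the base; the motor won't engage otherwise. If it still "
                  "won't start, I can arrange a replacement."},
    {"user": "Can I return an opened item?",
     "assistant": "Absolutely, within 30 days with a receipt. Opened items are "
                  "fine as long as all parts are included. Want me to email "
                  "you a return label?"},
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        record = {"messages": [
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": ex["user"]},
            {"role": "assistant", "content": ex["assistant"]},
        ]}
        f.write(json.dumps(record) + "\n")
```

Note that no facts are being taught here; each example demonstrates a behavior (tone, length, the offer to escalate) that we want to become instinct.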
Seeing the two through this lens, the CTO’s question becomes clearer. You wouldn’t hire a brilliant graduate and say, "You don't need any training, just live in the company library." You need both.
You need them to have access to the facts and to have developed the core skills and behaviors to use those facts effectively.
You need memory and training.
The RevSystems AI Maturity Model: A Three-Phase Journey
The logical next question is, "When do I use which?" This is where strategy comes in. Slapping both techniques on a problem from day one is inefficient and premature. At RevSystems, we guide our clients through a deliberate, three-phase journey that maximizes value and minimizes risk at every step.
Phase 1: Validate with Prompts and a Powerhouse LLM
The first step in any AI project is to prove the value of the use case.
Is this idea even viable?
Will it solve the intended problem?
The fastest way to answer this is to use the biggest, most powerful general-purpose model available (like GPT-5 or Gemini 2.5 Pro Experimental) and focus all your energy on sophisticated prompt engineering.
Why start here? Speed. You can go from an idea to a functional proof-of-concept in hours or days, not weeks or months. You’re testing the "what" before investing heavily in the "how."
What it looks like: You construct detailed system prompts that give the model its role, context, and instructions. For a customer service bot, you might write a 500-word prompt explaining its persona, the steps to follow, and key information about your return policy.
The Goal: The goal here is simple: validation. Does the model, when given the best possible instructions, produce a useful output? If the answer is no, then neither RAG nor fine-tuning will save the project. If the answer is yes, you have a green light to proceed.
The limitation of this phase becomes apparent quickly. The model's knowledge is static and generic, and the cost per API call on these flagship models is high. It's a fantastic sandbox, but it's not a production-ready factory.
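A Phase 1 "500-word prompt" is really just careful string assembly. Here is a minimal sketch; the persona, steps, and policy text are invented, and the commented-out API call is a placeholder since the point of this phase is the prompt, not the plumbing.

```python
# Phase 1 sketch: all the intelligence lives in the system prompt.
# The policy text and persona are hypothetical examples.
RETURN_POLICY = "Full refunds within 30 days; exchanges within 60 days."

def build_system_prompt():
    return "\n".join([
        "You are Acme's customer service assistant.",
        "Persona: patient, friendly, and concise.",
        "Steps: 1) acknowledge the issue, 2) ask one clarifying question if needed,",
        "3) resolve using the policy below, 4) offer escalation to a human.",
        f"Return policy: {RETURN_POLICY}",
        "Never invent order details you were not given.",
    ])

# A flagship-model call would look roughly like this (placeholder client):
# messages = [{"role": "system", "content": build_system_prompt()},
#             {"role": "user", "content": "I want to return my blender."}]
# response = client.chat.completions.create(model="<flagship-model>", messages=messages)

print(build_system_prompt())
```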
Phase 2: Ground with RAG
Once you've validated the use case, the first and most critical step toward production is to ground the model in your business's reality.
This is where RAG makes its grand entrance.
Why is this the next step? Accuracy and trust. Before you can worry about style or cost, you must ensure the AI is providing correct, up-to-date, and verifiable information. RAG is the most direct path to achieving this.
What it looks like: We connect the LLM to your internal knowledge sources. This could be a Confluence wiki, a SharePoint document repository, your product manuals, or your CRM database. We use embedding models to convert this data into a vector representation that the AI can search through at lightning speed. Now, when a user asks about a feature in your latest product release, the model retrieves the actual release notes and bases its answer on that ground truth.
The Goal: To move from a "generally smart" system to one that is "specifically knowledgeable" about your business. You are eradicating generic answers and killing hallucinations. You are building a system that your employees and customers can actually rely on.
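The indexing step behind all of this can be sketched simply: split each document into overlapping chunks, embed each chunk, and store the vectors for retrieval. The page text below is invented, and the real `embed()` call is left as a comment since the embedding model is deployment-specific.

```python
# Phase 2 indexing sketch: chunk a document into overlapping windows
# so no fact is cut in half at a boundary, then store each chunk for
# vector search. The document text is a hypothetical wiki page.
def chunk(text, size=200, overlap=40):
    """Split text into overlapping character windows."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

page = "Release 4.2 adds the pulse function to the Pro-X Blender. " * 10

index = []
for i, c in enumerate(chunk(page)):
    # In production: index.append({"id": i, "vector": embed(c), "text": c})
    index.append({"id": i, "text": c})

print(len(index), "chunks indexed")
```

The overlap is the subtle design choice: adjacent chunks share a margin of text so a sentence straddling a boundary is still retrievable in full from at least one chunk.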
For many companies, the journey pauses here for a while.
A well-implemented RAG system is incredibly powerful and can solve a huge range of business problems. But as you scale, two new challenges will emerge: performance on nuanced tasks and, critically, cost.
Phase 3: Optimize and Specialize with Fine-Tuning
This is the final, and most sophisticated, phase of the journey.
It's where you move from having a capable AI tool to having a deeply integrated, cost-effective, and uniquely skilled AI asset.
This is the phase that directly answers the CTO’s question. We fine-tune not just to improve quality, but to radically optimize the entire operational stack.
The most significant lever we pull here is the move from massive, general-purpose LLMs to much smaller, specialized Small Language Models (SLMs).
An SLM, like a Mistral 7B or a fine-tuned version of Llama 3 8B, is a fraction of the size of a model like GPT-4. Out of the box, it’s far less capable. But when you fine-tune it on a specific task, it can become a world-class expert in that narrow domain.
This is where the magic happens. A fine-tuned SLM can outperform its massive cousin on your specific task while delivering 20x to 50x savings on inference costs.
Let that sink in.
This isn’t an incremental improvement. We are talking about a fundamental shift in the economic model of your AI deployment. When your AI-powered feature is being used thousands or millions of times a day, this is the difference between a project with a positive ROI and one that bleeds cash.
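The back-of-the-envelope math is worth doing yourself. The per-token prices below are illustrative placeholders, not quotes from any provider; plug in your own rates and volumes.

```python
# Rough cost model behind the "20x to 50x" claim.
# All prices are HYPOTHETICAL placeholders for illustration.
calls_per_day = 100_000
tokens_per_call = 1_500              # prompt + completion combined

flagship_price = 10.00 / 1_000_000   # $/token, hypothetical flagship API
slm_price = 0.25 / 1_000_000         # $/token, hypothetical self-hosted SLM

def monthly_cost(price_per_token):
    return calls_per_day * tokens_per_call * price_per_token * 30

flagship = monthly_cost(flagship_price)
slm = monthly_cost(slm_price)
print(f"Flagship: ${flagship:,.0f}/mo  SLM: ${slm:,.0f}/mo  "
      f"({flagship / slm:.0f}x savings)")
```

At these assumed rates the gap is roughly 40x per month on identical traffic, which is exactly the difference between a feature that prints money and one that bleeds it.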
We fine-tune to teach the SLM the skills that RAG cannot provide:
An internal contract analysis bot is fine-tuned to not just summarize clauses (which RAG can help with), but to identify risk with the specific lens your legal team uses.
A sales outreach email generator is fine-tuned to capture the exact blend of curiosity and authority that your top salesperson uses, a style that can't be found in any public document.
A data-to-text engine is fine-tuned to always produce summaries in a precise, non-negotiable JSON schema, eliminating a whole class of errors and data validation steps in your application.
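The third bullet deserves a concrete picture: downstream code should validate the model's output against the schema before trusting it, and a well fine-tuned model makes that validation pass without per-prompt reminders. The schema and sample output below are invented.

```python
# Sketch of enforcing a "non-negotiable JSON schema" on model output.
# The fields and sample payload are hypothetical.
import json

REQUIRED = {"deal_id": str, "stage": int, "risk": str, "summary": str}

def validate(raw):
    obj = json.loads(raw)  # raises ValueError on malformed JSON
    for field, typ in REQUIRED.items():
        if not isinstance(obj.get(field), typ):
            raise ValueError(f"bad or missing field: {field}")
    return obj

model_output = ('{"deal_id": "ACME-17", "stage": 4, "risk": "medium", '
                '"summary": "Budget approved; engineer hesitant."}')
print(validate(model_output)["risk"])
```

With a generic model, some fraction of outputs fail this check and need retries; a model fine-tuned on the schema drives that failure rate toward zero, which is the "whole class of errors" being eliminated.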
This is the endgame: a system that leverages both the library and the university.
The Synergy in Action: Where RAG and Fine-Tuning Converge
The most powerful AI systems don't choose between RAG and fine-tuning. They orchestrate a beautiful dance between them. The fine-tuned model becomes the "brain" of the operation, possessing the core skills and personality, while RAG acts as its "short-term memory" feeding it the specific, real-time context it needs for the task at hand.
Let’s look at how this plays out in the real world, for both the front and back of the house.
Front of the House: The Superpowered Customer Support Agent
The Base Model: We take an efficient SLM and fine-tune it on thousands of transcripts from your best customer support agents.
Fine-Tuning Teaches: Empathy, your brand's specific conversational style, a patient troubleshooting methodology, and the intuition to know when to escalate an issue. It learns the skill of customer service.
RAG Provides: When a customer starts a chat, the RAG system instantly pulls their entire order history, past support tickets, and any relevant knowledge base articles for the product they own.
The Result: The fine-tuned model doesn't just give a generic answer. It says, "Hi John, I see you recently purchased the Pro-X Blender and are asking about the pulse function. I also see you had a question about the warranty last month. Let's get this sorted out. Based on the manual for your specific model, here's how the pulse function works..." This response is skillful, personalized, factually accurate, and hyper-relevant.
It's a level of service that is impossible to achieve with either technique alone.
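In code, the division of labor is clean: RAG builds a facts-only context block, and the fine-tuned model supplies the voice and method. The customer data and the retrieval stub below are hypothetical stand-ins for real order, ticket, and manual lookups.

```python
# Synergy sketch: RAG supplies short-term memory (facts), the
# fine-tuned SLM supplies the skill. All data here is invented.
def retrieve_customer_context(customer_id):
    # Stand-in for RAG lookups against orders, tickets, and manuals.
    return {
        "name": "John",
        "orders": ["Pro-X Blender (June)"],
        "tickets": ["Warranty question (last month)"],
        "manual_excerpt": "Pulse: hold the PULSE button for short bursts.",
    }

def build_prompt(customer_id, question):
    ctx = retrieve_customer_context(customer_id)
    return (
        f"Customer: {ctx['name']}\n"
        f"Orders: {', '.join(ctx['orders'])}\n"
        f"Past tickets: {', '.join(ctx['tickets'])}\n"
        f"Relevant manual text: {ctx['manual_excerpt']}\n\n"
        f"Question: {question}"
    )

# Note what's absent: no persona, no tone instructions, no troubleshooting
# steps. The fine-tuned model already internalized those, so the prompt
# carries only facts.
print(build_prompt("cust-42", "How does the pulse function work?"))
```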
Back of the House: The AI Revenue Operations Analyst
The Base Model: We fine-tune an SLM on your historical sales data, CRM records, and win/loss analysis reports.
Fine-Tuning Teaches: Your company’s unique definition of a "qualified lead," the subtle patterns that indicate a deal is at risk of stalling, and how to structure a concise weekly pipeline report in the exact format your CRO prefers. It learns the skill of sales analysis.
RAG Provides: A sales manager asks, "Give me an update on the Acme Corp deal and flag any risks." The RAG system retrieves the latest emails with the client, transcripts of the last three Zoom calls, and the current state of the deal in Salesforce.
The Result: The system doesn't just list the facts. The fine-tuned brain analyzes the RAG-supplied context through its specialized lens. It responds: "The Acme Corp deal is in Stage 4. While the budget is approved, sentiment analysis of the latest call transcript shows hesitation from their lead engineer. We have a 65% win probability based on similar deals where technical stakeholders showed late-stage reluctance. Recommend scheduling a dedicated technical deep-dive."
This is not just data retrieval with a cheap insight atop it… it's actionable business intelligence.
The Journey is the Destination
So, when my client asked, "Why fine-tune when we have RAG?" my answer was simple.
"We start with RAG to make our AI knowledgeable. We graduate to fine-tuning to make it wise, efficient, and truly our own."
The choice is not RAG or fine-tuning. The strategic imperative is to master the journey from prompting -> RAG -> fine-tuning + RAG. This path allows you to de-risk your investment, prove value quickly, build trust through accuracy, and finally, scale your success with systems that are not only intelligent but also economically viable and uniquely yours.
It’s about moving beyond simple AI experimentation and starting to build true, sustainable, AI-powered systems that drive revenue and create a lasting competitive advantage.
It's the difference between renting an AI's intelligence and owning an AI asset.
Friends: in addition to the 17% discount for becoming annual paid members, we are excited to announce an additional 10% discount when paying with Bitcoin. Reach out to me; these discounts stack on top of each other!
👋 Thank you for reading Wealth Systems.
I want to learn what topics interest you, so connect with me on X.
…or you can find me on LNKD if that’s your deal.
I started Wealth Systems in 2023 to share the systems, technology, and mindsets that I encountered on Wall Street. I am a Wall St banker turned ₿itcoin nerd, ML engineer & family office investor.
💡The BIG IDEA is to share practical knowledge so we can each build and optimize our own wealth engines and combine them into a wealth system.
To help continue our growth please Like, Comment and Share this.
NOTE: The content provided on this blog is for informational purposes only and does not constitute financial, accounting, or legal advice. The author and the blog owner cannot guarantee the accuracy or completeness of the information presented and are not responsible for any errors or omissions or for the results obtained from the use of such information.
All information on this site is provided 'as is', with no guarantee of completeness, accuracy, timeliness, or of the results obtained from the use of this information, and without warranty of any kind, express or implied. The opinions expressed here are those of the author and do not necessarily reflect the views of the site or its associates.
Any investments, trades, speculations, or decisions made on the basis of any information found on this site, expressed or implied herein, are committed at your own risk, financial or otherwise. Readers are advised to conduct their own independent research into individual stocks before making a purchase decision. In addition, investors are advised that past stock performance is no guarantee of future price appreciation.
The author is not a broker/dealer, not an investment advisor, and has no access to non-public information about publicly traded companies. This is not a place for the giving or receiving of financial advice, advice concerning investment decisions, or tax or legal advice. The author is not regulated by any financial authority.
By using this blog, you agree to hold the author and the blog owner harmless and to completely release them from any and all liabilities due to any and all losses, damages, or injuries as a result of any investment decisions you make based on information provided on this site.
Please consult with a certified financial advisor before making any investment decisions.