Blog

AI Trailblazers: Ryan Zahrai On How Zed Law Navigates AI Adoption in Legal Services

March 31, 2025
Prasad Kawthekar

In this edition of our AI Trailblazers series, Prasad Kawthekar sits down with Ryan Zahrai, founder of Zed Law, to discuss how his firm is implementing AI solutions while navigating the unique challenges of the legal industry. As Zed Law recently celebrated its first anniversary, Ryan shares insights on their AI journey, the tools they're using, and his vision for AI's role in legal services.

On Zed Law's AI Journey

Can you share your AI journey and how you've been incorporating these tools into your legal practice?

Ryan: We've been very willing to adopt and experiment with AI tools since before starting Zed Law full-time. We believe that without compromising client data, we can use AI to speed up many of the day-to-day processes that lawyers typically handle.

For example, instead of writing a long-form email response when a client asks for an employment agreement or advice on a capital raise, we use tools like ChatGPT. We've created our own Zed Law GPT within ChatGPT with dedicated teams and projects, using it as a mechanism to train and create output much faster than doing it within Outlook.

For legal output specifically, we frequently use Claude from Anthropic, as we find it produces better legal content than ChatGPT.

The key concern has always been privacy considerations. We've been very careful not to drag and drop client documents or confidential information into these platforms because we're not fully confident in their current structure regarding confidentiality. If there were a leak, we would be squarely to blame.

Instead, we use these interfaces more as chatbot assistance – like interacting with a qualified paralegal. I might draft an initial response, throw it into Claude, and ask it to "tweak this up for me, make it cleaner" rather than spending time getting it perfectly right myself. Then I can copy-paste that into Outlook.

This approach led us to discover Dashworks through someone who joined Zed Law who was developing his own legal AI tool in Australia. We've been using Dashworks with more confidence in its security posture. It plugs into our Notion and Slack, allowing our team to interface with AI in a responsive desktop manner.

On AI Strategy for Teams Starting Their Journey

What advice do you have for new teams trying to understand how to develop their AI strategy?

Ryan: I think teams should be comfortable with experimentation initially, but recognize that as a law firm, our ability to experiment is constrained by the regulatory nature of our work.

For those entering service-based businesses or other industries, I recommend actually signing up and paying for the established platforms. Generally, there's more robustness with paid versions, and according to their terms, they don't use as much of your input for training. Pay for the big players like ChatGPT/OpenAI, Claude (which is getting significant funding with Amazon heavily involved), or Gemini, and experiment with their suggested use cases for your particular service.

However, I strongly recommend never relying completely on the output. There have been numerous occasions where ChatGPT has produced content that looks great and seems confident, but when you actually interrogate it, especially with legal content, it's simply not accurate. Page references might be incorrect or cited case law might not exist.

As a law firm, we need to be more careful with how we use AI. Most businesses can probably accept 80-90% accuracy, which these platforms can deliver, but for lawyers, 80% isn't acceptable.

On Tools That Work — And Those That Don't

You've mentioned several tools you're using. Are there experiments that haven't worked out as intended?

Ryan: One particularly painful experience was with Microsoft Copilot. Last year, we engaged someone to work with us who was very pro-Microsoft. At the time, we were in a Google house on G Suite. This person convinced us that G Suite wasn't secure enough for a law firm and that we needed to move to Microsoft. They claimed to be one of the best Copilot client engineers in the world, promising to build us the best agents throughout our ecosystem once we migrated.

We spent significant time, energy, and effort on the migration. While we may be more secure from an infrastructure perspective now, Copilot simply doesn't do what we want it to do. It times out frequently, and while it's using OpenAI's backend (probably GPT-4), it has too many layers and isn't nimble enough to deliver the quality we need. It's probably two years behind other tools. This morning, we were literally discussing moving back to G Suite.

We intended for Copilot to sit within Outlook and not just summarize emails but also propose actions, outcomes, email responses, and create agents that could file documents directly to client files. Despite Copilot being built into all our Microsoft suite products, we haven't realized much value from it.

The pain point is that Copilot requires an annual license with no way around that, so we're paying annual fees for a product we've only used for four or five months and don't plan to continue with.

Even basic functionality is lacking. For instance, you'd expect Copilot within Microsoft Word to draft a contract clause in context with the clauses above and do it in markup, but it can't handle that instruction. Meanwhile, third-party AI tools are delivering these capabilities through simple plugins.

On Legal-Specific AI Tools

What do you think of tools like Harvey and Spellbook which are more verticalized on legal?

Ryan: Tools like Harvey, which target enterprise law firms as their primary customers, will provide significant value because of the high rates lawyers at those firms charge for mundane or repetitive tasks.

Harvey is great for large document digestion, such as determining whether documents are relevant in a litigation context. Spellbook and similar products are essentially "Grammarly on steroids," allowing you to make additional changes to documents based on best practices or identify risks.

While these tools are helpful, they're not super contextualized to Australia at the moment. They're very Americanized, and you can't easily get them to adopt the posture of an Australian lawyer. Australian and American lawyers operate very differently – Australian lawyers typically draft things to be understandable by their clients, while American lawyers often draft in ways that are less comprehensible to clients. So there's limited utility in Australia or the Southeast Asian markets currently.

On AI's Impact on Legal Service Strategy

How do you think about AI tools influencing your company strategy in general?

Ryan: If legal teams and other service-based businesses aren't already thinking about how they can front-end AI to service their customers or clients, they're already behind.

We initially had big visions to develop this ourselves and invested a decent amount of funds in that direction. However, we've since decided to invest in another company, Verity AI, which has already built exactly what we envisioned. Their platform looks like ChatGPT but with a nicer UI where clients can ask legal questions. The full correspondence chain between the client and AI agent can then be quickly reviewed by a lawyer with the click of a button.

This approach significantly reduces pricing for clients. Client expectations have evolved – they don't need lawyers from the first interaction to the end. They can go 70% of the way with a trained AI agent, and then the human lawyer provides the final verification, effectively acting as insurance.

On AI for Internal Operations

How do you think about AI in the context of internal operations and employee growth?

Ryan: We're very much focused on growing without adding additional headcount by instead adding AI agents. We're in talks with a company called Decidr.ai that specializes in agentic AI and developing agents to assist with various processes.

If I'm going to add headcount, they need to be fee earners – i.e., lawyers. For support staff functions, I want to replicate that through AI agents at different points in the workflow for client engagement and delivery. We're also exploring building these agents ourselves through Relevance AI, with whom we're in direct communication.

For the last two years, we've heavily integrated our systems through Zapier, and now we want to level that up with agents.

On the Future of AI in Legal Services

Looking forward, what do you hope AI can help you accomplish in the next six months to two years?

Ryan: One thing I've become acutely aware of is the highly variable standard of quality in legal services. Just like you can have good or bad dentists or doctors, that variability keenly exists in the legal industry as well.

I would love to see AI more widely adopted to augment and support legal practice so that the overall quality of legal output improves throughout the entire industry. I've seen many clients get burned by lawyers who simply weren't up to the task. It's important for any lawyer to recognize where they're not strong and either avoid that area or use tools to augment their practice.

While law firms might consider it commercially counterintuitive, I would also love to see price points for clients reduce so that the initial interaction with lawyers happens when they can add that final point of value, not during the initial burdensome discovery process. Clients often feel they're not getting value when paying lawyers to read through thousands of pages of documents.

I would love to reach a point where all that material can be synthesized efficiently, allowing lawyers to focus on adding value through strategic advice, which AI may not be able to do yet.

Parting Advice for Teams Exploring AI

Any other parting thoughts for teams exploring AI?

Ryan: The biggest learning I've taken in the last year reminds me of the crypto boom around 2016-2018, when everyone claimed to be crypto experts. I think we're in the same space with AI – it's only been around for consumer use for about two years, yet there are so many self-proclaimed AI experts.

My advice to startups is to experiment and get comfortable with AI. It's not scary if you start with even the most basic tool like ChatGPT. However, be very wary of anyone claiming to be an AI expert, since the technology has only been widely available for a short time.

Before engaging someone to build out your AI infrastructure, ask for a live demo of what they've built and references from previous clients. Get those references to go deep on the value provided. We didn't do that due diligence, and I've been burned as a result.
