<![CDATA[Jam.dev blog]]>https://strawberryjam.ghost.io/https://strawberryjam.ghost.io/favicon.pngJam.dev bloghttps://strawberryjam.ghost.io/Ghost 6.10Mon, 22 Dec 2025 10:14:46 GMT60

<![CDATA[How support and engineering collaborate at Attio]]>https://strawberryjam.ghost.io/how-support-and-engineering-collaborate-at-attio/693c3068ee666e0001c817a8Thu, 18 Dec 2025 16:57:39 GMT

Attio is an AI-native CRM used by GTM teams at Granola, Replicate, Railway, and Public. They have been scaling extremely fast over the last year, with headcount nearly doubling. When a company grows that quickly, the seams usually show up first between the support and engineering teams.

At Jam we've been obsessed with how the relationship between these two teams is evolving. We sat down with Elyse Mankin (Attio's Head of Support) and Philip Beevers (VP of Engineering) to talk about how they keep that relationship healthy (and productive!) as Attio scales.

You can catch the full version on YouTube or read our notes below.

The natural tension point between support and engineering

Elyse doesn’t sugarcoat the fact that there is a natural tension between the two teams, simply by virtue of the timeframes they operate in and what they're exposed to. Support sees the pain firsthand because they interact with frustrated customers who are blocked on something. They (usually) operate in a reactive paradigm.

Engineering teams couldn't be further removed from that. They operate based on the product roadmap, which is set months in advance, with clearly scoped objectives and longer timeframes.

“There is a natural tension between support and engineering… and that’s okay.”

Building empathy from day one

The inevitable question here is: how can teams reduce the natural tension between support and engineering? For Attio, the answer is empathy.

Every new employee (regardless of role) spends time in the support queue early in their onboarding. It’s a small thing, but direct exposure to customers' problems changes how people think about the product.

More importantly, however, it changes how people treat the support team.

“Just remind people there's a human being on the other end of this stuff.”

Build trust between the teams before crisis hits

Both Elyse and Phil kept coming back to the same idea: you can’t build trust only when things are on fire. You need to build up relationship “credit” before you need to spend it.

Phil suggested two ways for teams to do this:

  1. Shadowing: He recommends that all engineers go sit with their colleagues in support, watch their workflows, and help with tickets.
“You’re going to have so many ‘Oh, I never knew that’ moments.”
  2. Tabletop incident drills: At Attio they occasionally run incident simulations with support and engineering in the room, to build a bond well before it's tested.
"A low-stress simulation for a high-stress activity."

Clear escalation starts with clear context

According to Elyse, the most critical thing the support team must do during tense moments is to reduce ambiguity.

“What we need to do from the support side is very clearly communicate the situation - the customer needs… and any changes.”

Lack of clarity is what makes both teams get short with each other - not in serious incidents, where everyone knows to be on their best behavior, but in the “important… maybe?” moments, where the urgency and importance of a problem are not clear.

Experimentation is the default (except in some situations)

We wanted to understand how Attio is thinking about the plethora of new AI tools for customer support.

Both leaders want their teams experimenting constantly, but they draw one hard boundary: don’t experiment mid-incident. Outside of that, almost everything is fair game. Elyse used an interesting metaphor:

“We might not get to determine the house that we live in, but we get to figure out how to decorate it.”

It's important to set the guardrails for experimentation first, so it doesn't end up being chaotic.

What they’re building towards in 2026: 24/7 support

Phil and Elyse's big 2026 push is to move toward 24/7 support. Both of them were quick to highlight that this is not a support or engineering project. It’s a customer service delivery problem that will be co-owned by both teams.

“Delivering a service at three and a half, four nines… that’s not an eight hours a day operation.”

We had such a great time jamming with Elyse and Phil! It's clear that they've spent a long time thinking about (and battle testing) their ideas internally, and we think their approach could be a blueprint for support and engineering leaders orienting their teams for 2026.

We're looking to have similar conversations with support and engineering leaders to understand how their workflows are evolving. Sound like you? Drop us a note 🍓

]]>
<![CDATA[How Twilio Builds AI at Internet Scale (w/ Head of AI)]]>https://strawberryjam.ghost.io/how-twilio-builds-ai-at-internet-scale-w-head-of-ai/69089b157e763100015d86a2Mon, 01 Dec 2025 16:38:05 GMT

Twilio powers billions of messages between businesses and customers every month. At that scale, even the smallest model error - one missed detection, or one mistimed send - can affect millions of people.

We spoke to Zachary Hanif, who leads AI, ML, and Data at Twilio, about what it really takes to deploy AI across a product that operates at such scale.

Here’s what we learned:

The hidden tax of AI

For Zach, the real cost of AI emerges after launch.

“Building AI has a cost. But operationalizing and maintaining AI has its own cost that goes beyond the normal expectations of software engineering.”

AI systems, he says, age faster than code. Models drift, data changes, and what once worked perfectly starts to fail silently.

Twilio treats model maintenance like infrastructure. Every model is monitored for accuracy, retrained when the world shifts, and tracked against both technical and business metrics.

“Your model is a representation of how the world worked when it was trained - and sometimes that world changes very slowly, sometimes really fast.”

Deploy and forget is not an option.

When 99% isn’t good enough

AI leaders often talk about “human-level” accuracy. At Twilio’s scale, Zach says that bar doesn’t cut it.

“At scale, something with 99% efficacy is wrong a lot of the time.”

With billions of messages in motion, a 1% failure rate translates to millions of mistakes. That’s why his teams chase the final tenth of a percent - the part that makes AI viable for production use, not just a fancy demo.
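
To make that concrete, here's a quick back-of-the-envelope calculation (the daily volume below is assumed purely for illustration, not a Twilio figure):

```python
daily_messages = 1_000_000_000  # illustrative volume, not a real Twilio number

for accuracy in (0.99, 0.999, 0.9999):
    errors = daily_messages * (1 - accuracy)
    print(f"{accuracy:.2%} accurate -> {errors:,.0f} mistakes per day")

# 99.00% accurate -> 10,000,000 mistakes per day
# 99.90% accurate -> 1,000,000 mistakes per day
# 99.99% accurate -> 100,000 mistakes per day
```

Each extra "nine" removes 90% of the remaining errors, which is why that final tenth of a percent matters so much.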

Reliability isn’t a nice-to-have when you're operating at the scale Twilio does.

Measure the code and the consequence

Zach draws a sharp line between technical success and business success.

“There was a famous case with the Netflix Prize - a team built a model that performed better, but it was too expensive to run. Netflix never used it.”

That story shaped how Twilio evaluates AI. Each model is measured twice:

  • once for technical efficacy (F1 score, AUC, inference cost)
  • once for business impact (NPS, adoption, fraud reduction)

A model that scores well on paper but fails in production doesn’t ship.
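
As a toy sketch of that dual scorecard - the metric names and thresholds here are invented for illustration, not Twilio's actual gate:

```python
def should_ship(m: dict) -> bool:
    """Both scorecards must pass before a model ships."""
    technical_ok = m["f1"] >= 0.90 and m["cost_per_1k_inferences"] <= 0.50
    business_ok = m["nps_delta"] >= 0 and m["fraud_reduction_pct"] > 0
    return technical_ok and business_ok

print(should_ship({"f1": 0.93, "cost_per_1k_inferences": 0.20,
                   "nps_delta": 2.0, "fraud_reduction_pct": 1.5}))  # True
```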

The UX is the hardest part

When it comes to integrating AI into products, Zach says the hardest part is designing the way users interact with it.

“The hardest problem isn’t usually the technical part. It’s finding product–market fit and making sure the thing you’ve done is maintainable.”

At Twilio, AI features go through countless design iterations before anything ships. The challenge is helping users trust what the system does - especially when it makes decisions on their behalf.

For most companies, that’s where the work really begins.

From compliance to care

One of Twilio’s newest AI products is its Compliance Toolkit for Messaging, which automatically checks whether messages comply with local regulations before they’re sent.

For example, some EU countries enforce “quiet hours” after certain times of day. Twilio’s AI can now detect when a company is about to send a marketing message past that threshold, and automatically delay it until morning.

“The right human gets the right message at the right time.”

The next five years

Zach sees AI gradually disappearing into the background - doing more work invisibly so humans don’t have to.

“As more becomes transparent to the end user, it just works. Twilio becomes less of a communication layer between machines and humans.. and more a communication layer between humans and humans.”

It’s about software that understands intent, and does the right thing automatically.

We had such a great time jamming with Zachary! You can catch our full conversation on YouTube, alongside episodes with engineering and product leaders from Intercom, Monday.com, and Vercel.

]]>
<![CDATA[Pipedrive's Internal AI Tools Powering 100,000+ Customers]]>https://strawberryjam.ghost.io/pipedrives-internal-ai-tools-powering-100-000-customers/69089f757e763100015d86e5Tue, 25 Nov 2025 16:39:00 GMT

Pipedrive is a CRM platform used by over 100,000 teams across 180 countries. Behind the scenes, its engineering org handles 30,000 to 40,000 customer interactions a month - and a growing share of that ecosystem is powered by AI.

We spoke with Agur Jõgi, Pipedrive’s CTO, about how his team is building AI into their engineering workflow.

Here are the highlights from our conversation.

A completely new code review process

At Pipedrive, every pull request runs through an AI code review pipeline - three models in parallel.

“We run all our pull requests through three bots - OpenAI, Claude, and one smaller model. We constantly do champion-challenger testing.”

The results are pushed into a Slack channel called "Pipedrive Intelligence" where engineers can see how their code scored and what patterns the models spotted.
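
As a rough sketch of what that shape of pipeline could look like - the model names and review stub below are stand-ins, not Pipedrive's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stub: in practice each bot would call a different model's API
# with the PR diff and a review prompt, returning structured comments.
def review(model_name: str, diff: str) -> dict:
    return {"model": model_name, "comments": f"[{model_name}] reviewed {len(diff)} chars"}

def review_pull_request(diff: str) -> list:
    models = ["openai-bot", "claude-bot", "small-challenger-bot"]
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        # Run the three reviewers in parallel over the same diff.
        results = list(pool.map(lambda m: review(m, diff), models))
    # Results could then be posted to a channel like "Pipedrive Intelligence".
    return results
```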

Instead of replacing human review, it’s a peer-learning feed. Engineers compare notes, learn from the AI’s comments, and discuss them in “lunch-and-learn” sessions.

It’s an evolving internal product - equal parts tooling and culture.

Juniors write faster. Seniors learn differently.

Agur saw what many teams see with generative coding tools: productivity spikes for juniors, more review pressure on seniors.

“Juniors started to write way more code. At first it slowed our seniors - they had more to review.”

But over time, seniors discovered value too. Since AI doesn’t always solve problems the way a human would, it forces veterans to rethink old habits.

“AI generates slightly different code. It makes me rethink my approach - maybe I’m stuck in my senior thinking.”

The result has been a new kind of continuous training loop, where reviewing AI-generated patterns sharpens both intuition and creativity.

Building "lazy" smart SREs

Before generative AI became mainstream, Pipedrive’s reliability engineers were already experimenting with ML to prevent incidents.

They trained models on years of Datadog logs to predict patterns that led to outages - and alert teams before they happened.

“It enabled us to start contacting customers pre-emptively, saying, ‘Hey, if you keep using the system like this, you might hit trouble.’”

Agur calls it “AI-enabled SREs” or, with a grin, lazy SREs.

“Being lazy is one of the most innovative skills a person can have. If you’re lazy and act smart, you get a bonus and still sleep through the night.”

The system now flags anomalies early enough for engineers to adjust configurations and prevent downtime - a powerful layer of resilience.

Build vs buy: learning by building

Pipedrive’s engineering excellence team builds most of its AI tools in-house. It’s slower upfront, but it deepens understanding.

“We want to understand how things work. We build it, learn, and if it doesn’t help, we throw it away.”

They’ll adopt third-party models for general-purpose tasks, but in core domains they prefer home-grown experiments. Agur frames it as a strategic trade-off between speed and literacy: the team learns faster when it builds the tools it uses.

A traffic-light system for safe experimentation

To move fast without crossing compliance lines, Pipedrive created a red-yellow-green framework that governs how teams use AI on code and data.

  • Green = safe to use with public or non-sensitive data
  • Yellow = consult an internal “champion” for review
  • Red = requires legal or data-protection approval
“We want every team to challenge new technologies, but there must be a common rule set to avoid getting into trouble while experimenting.”

It’s a lightweight way to balance innovation with GDPR and EU AI Act requirements - giving engineers autonomy while protecting customer trust.
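
Here's a minimal sketch of what such a policy check could look like in code - the three categories mirror the framework above, but the classification rules are invented for illustration:

```python
from enum import Enum

class Light(Enum):
    GREEN = "safe to use with public or non-sensitive data"
    YELLOW = "consult an internal champion for review"
    RED = "requires legal or data-protection approval"

# Hypothetical rules; a real policy would come from legal/data protection.
def classify(data_class: str) -> Light:
    if data_class == "public":
        return Light.GREEN
    if data_class == "internal":
        return Light.YELLOW
    return Light.RED  # anything touching personal or customer data

assert classify("customer_pii") is Light.RED
```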

The long view

AI has fostered a deeply engaged learning culture within Pipedrive's engineering teams.

When models challenge your code, flag patterns in production, or surface insights from tens of thousands of user interactions, they force teams to see their systems differently.

“It’s a deep learning curve for us - but an exciting one.”

We had such a great time jamming with Agur! You can catch our full conversation on YouTube, alongside episodes with engineering and product leaders from Intercom, Monday.com, and Vercel.

]]>
<![CDATA[How Plain is reshaping customer support (w/ head of community Susana de Sousa)]]>https://strawberryjam.ghost.io/how-plain-is-reshaping-customer-support-w-head-of-community-susana-de-sousa/691de46b538fa40001f8e86dMon, 24 Nov 2025 16:44:00 GMT

Susana de Sousa leads Community at Plain, an AI-native customer support platform used by teams at Cursor, Raycast, Granola, and n8n.

We spoke to Susana about why she thinks the old customer support playbook is dying and why 2026 might be the year of the support engineer.

Susana is a true veteran in the customer support space. She drew on her experience at Airbnb and Loom to explain how support leaders should orient themselves at a time when the tools, expectations, and roles are all changing at once.

Catch the full conversation on YouTube, or read our notes below.

It's a special time to be working in customer support

Support is going through a generational shift. The tools of the trade are changing, and the expectations placed on support leaders are changing even faster. Susana describes this moment as one of both uncertainty and opportunity:

“It's a very, very special time to be working in customer support. The old playbook is dying, or maybe it's already dead. And the new one isn't fully written yet. There's a million opportunities, there's a lot of excitement for the possibilities.”

But that excitement sits alongside real tension. Support leaders are feeling two distinct kinds of pressure:

Personal/job pressure:

With the tools changing so rapidly, there is a growing sense of anxiety among support teams, especially junior members, around the future of their role.

“We have the job anxiety bucket.. being afraid of losing your job or maybe not even knowing what your job is going to look like in the future.”

Executive pressure:

For support leaders, the pressure is coming from further up the org chart. The C-suite wants to make sure their support teams are leveraging the latest tools to boost resolution rates, and bolster customer success.

“There's a different bucket, which is the executive pressure, right? So if everyone else is doing this, then we should too. And that's typically like not the best approach to adopting new technology.”

While these tensions are inevitable given the rate of change, Susana outlined some frameworks for support leaders to help their teams transition better. Let's talk about a few:

“Support isn’t just about replying to tickets”

On LinkedIn, Susana wrote:

“Support isn't about replies to tickets. It's about removing the need for them.”

This frames support teams as proactive contributors, not solely reactive ones. With AI taking care of the low-leverage work, support teams can use their time and energy to solve high-leverage problems - the kinds that, once solved, remove the need for tickets in the first place.

For instance, when she worked at Loom, she stopped seeing tickets as tasks and started treating them as upstream signals:

“Every question was a signal. My job really wasn't to answer the question, but it was to make sure that the question never needed to be asked again.”

This narrative shift may seem obvious, but it's not how most support teams are perceived within their companies.

AI won’t fix a broken foundation

Support has lived through multiple hype cycles: automations, self-serve portals, live chat, proactive messaging, support ops. Now AI is the latest wave, and leaders feel pressure to adopt it whether they’re ready or not. Susana has seen this pattern for a decade:

“We've had automations, self-serve, live chat. They've all had their moment. We've been talking about reactive to proactive mindset shift for years. Support operations started to become a thing, and now everything must be AI.”

But AI only works if the underlying foundation works. If documentation is outdated, or workflows are ambiguous, AI will simply accelerate the chaos.

“You can't just slap AI on top of broken products and hope for the best.”

Before teams deploy a chatbot or build an agent, she encourages them to fix the basics: documentation, routing, ticket quality, instrumentation, and product health. AI amplifies whatever already exists - good or bad.

There’s no silver bullet in customer support

Susana is allergic to blanket advice. Every business has a different approach to customer support, and will inevitably have different needs.

“There is no one solution [that] fits every single business. Every single support team is different, much less the product or the business.”

Her approach is to work backward from the ideal state. What are some desired outcomes, and what are the systems that will get the team there? She pushes leaders to ask very specific questions:

“What would our success look like if we did X instead of Y? If we removed x% of technical issues, what would our contact rate look like?”

How to actually partner with engineering

In Susana's experience, the most effective customer support teams almost always have strong interdependent relationships with the engineering team.

"One thing that I like to ask is how can I make my engineering team's life easier?"

Communication between support and engineering is critical, especially because both teams are often working towards different objectives.

“We're working in different realities, right? We have different metrics, we have different goals, we have different systems.”

If the lines of communication aren't strong, engineering teams end up associating the support team with high-stress, reactive tasks:

“I feel like engineers really only hear from support when things go wrong, right?”

To counter this, she drew on her experience at Loom again, where her team tried to act as multipliers instead of problem-bearers. She encourages support teams to pick up the technical skills necessary to proactively solve issues where possible, and if not, at least do the legwork to make engineering's job easier.

“They would go into any troubleshooting with the goal of not just fixing the issue, but actually fixing the issue forever.”

Zooming out, she also recommends that support and engineering leaders sit down and figure out where their metrics and goals overlap, to actually incentivize better collaboration.

2026: the year of the support engineer

Susana thinks the support role will become increasingly technical, especially as AI takes on the repetitive, low leverage work.

“AI is taking the tier one type of work. What's left is more technical, more complex.”

That doesn’t mean everyone in support must become a full-time engineer, of course, but the directional trend is clear:

“Companies are realizing that they need their support teams to become more technical, and I truly believe that this is going to be a role that's going to truly define how modern support gets done.”

Learning technical skills has never been easier

Susana started this year feeling boxed out of the “builder” world. But with the current set of vibe coding tools, she quickly figured out how to do basic stuff that she might have called an engineer for in the past. She encourages every support leader to dabble with the tools and try things. Picking up technical skills has never been easier.

“I started this year having no idea how to deploy my own website, work with databases, or pull data from an API. Now I'm doing all of that because technology evolved in such a way that non-technical people, such as myself, can access that power.”

Her advice to her younger self (and to today’s support leaders) is about how you think, not which tool you use:

“Try to learn as much as possible about systems thinking, about approaching a problem in different ways… figuring out the steps towards a solution really teaches you how to think creatively as well.”

Actionable takeaways for support leaders

  • Support isn't about replies to tickets. It's about removing the need for them.
  • Treat every question as a signal and aim to make sure it never needs to be asked again.
  • Fix foundations first: product quality, docs, routing, and support ops, because AI is not going to help you if your foundations aren't right.
  • Reject blanket solutions. No single solution fits every business.
  • Show up as a multiplier for engineering. Start by asking “how can I make my engineering team's life easier?”
  • As a leader, invest in support engineers. Pick up technical skills yourself. It's never been easier.
  • Build systems thinking and creative problem solving as durable skills, regardless of the latest buzzy tool.

We couldn't agree more with Susana when she said “It’s a very, very special time to be working in customer support.” The old playbook may be dead, but if you get the foundations and the partnerships right, the next one is for the best support teams to write.

We had a great time jamming with Susana - we're looking to have similar conversations with support leaders to understand how their playbooks are evolving with AI. Sound like you? Drop us a note 🍓

]]>
<![CDATA[New in Jam! Mac shortcuts, dashboard filters, customer recording domains + more]]>https://strawberryjam.ghost.io/new-in-jam-6-updates-for-faster-bug-fixing/691f5d1aa2bbeb00013da17fThu, 20 Nov 2025 18:27:34 GMT

Today we’re excited to share six new updates to make Jam even faster & easier to use for you and your customers.

1. MacOS shortcuts

Now you can request Jams from your customers simply by typing /jam wherever you’re chatting with them. Here’s how to do it.


2. Customer recording domains

Just follow the step-by-step instructions on how to set up your domain, right in your dashboard settings.


3. Dashboard filters

Now you can easily filter all your Jams by type, creator, and source.


4. Jam for iOS redesign

We completely redesigned Jam for iOS, so you can easily create bug tickets from your phone. Check out the new record button, it’s our favorite 🤩


5. Linear embeds

Paste a Jam link in a Linear ticket or comment, and engineers can see the embedded video with network and console errors right there.


6. Collapsible folders

Keep your workspace nice and neat by collapsing folders. Plus, you can see how many Jams are inside each!


That’s a wrap. Happy Jamming!

]]>
<![CDATA[Just launched! Get customer bug videos w/o private data]]>https://strawberryjam.ghost.io/just-launched-get-customer-bug-videos-w-o-private-data/690cc6f3d5fa830001b29190Thu, 06 Nov 2025 16:07:13 GMT

Today we launched built-in privacy for customer support Jams. Now whenever your customers record a Jam, any sensitive info on their screen will be blurred out.

It works automatically, so your customers don’t have to do anything. Whether you’re using Intercom or any other customer support tool, all you have to do is send a recording link. Fields such as credit card numbers & passwords will appear blurred as soon as they start recording.

Now it’s easier than ever to see exactly what your customers see, without capturing their private data in the process. Reach out to our team to add auto-blur to your Jam plan.

Curious to see how auto-blur works? Read our docs to learn more.

Happy Jamming!

]]>
<![CDATA[Just launched! Jam with your customers in Zendesk, Intercom, HubSpot + more]]>https://strawberryjam.ghost.io/just-launched-jam-with-your-customers-in-zendesk-intercom-hubspot-more/690b879ebd835e0001eb19aaWed, 05 Nov 2025 17:29:17 GMT

Now Jam for Customer Support connects to any helpdesk! So you can triage customer bugs even faster.

It’s so easy to use, let us show you.




And the best part? Your customers don’t have to install anything. Just click, record, and send. You get the full context: metadata, dev logs, and video - packaged and ready for dev.

Happy Jamming!

]]>
<![CDATA[How Givebutter Uses AI to Handle 60% of Support Tickets]]>https://strawberryjam.ghost.io/how-givebutter-uses-ai-to-handle-60-of-support-tickets/68fb711e5f5f690001c15865Wed, 05 Nov 2025 17:16:20 GMT

Sunny Ellis runs support at Givebutter, a fundraising platform that’s helped nonprofits process over $5 billion in donations.

Because most Givebutter users aren’t technical, support is core to the mission. Sunny’s team now uses AI to deflect 60% of inbound requests, and we wanted to learn how they pulled it off.

Here are the highlights from our conversation.

Automating the easy 60%

Givebutter’s support team has scaled from five to thirty people in just four years. But their biggest productivity boost actually came from automation.

“We’re deflecting about 60% of our inbound conversations right now with AI.”

Sunny’s team uses Intercom’s Fin to automatically handle repetitive questions like “How do I log in?” or “Where’s my receipt?” Those tickets never reach a human anymore.

This lets them respond to customers faster, and frees up more time for human support agents to focus on complex or emotional issues that actually do need human judgment.

Fix your docs

Before adopting AI, Sunny realized the success of any automation depends on one thing: clean, accurate documentation.

“If your documentation isn’t organized or complete, get that in order first. It’s only as good as what you feed it.”

The Givebutter team overhauled their help center before introducing Fin. Every article was reviewed, rewritten, and updated. That investment paid off. Fin could be set up in minutes and has proven much more reliable when serving customers.

AI doesn’t replace empathy

Sunny’s hiring philosophy shifted after bringing AI into the workflow. With automation handling the simple questions, the remaining tickets are tougher and more emotionally charged.

“By the time a conversation gets past AI, it’s complex. So we hire for empathy and critical thinking, not just tech skills.”

The team’s north star: let AI take care of efficiency, and let humans take care of connection.

Building internal copilots with Claude

Beyond customer deflection, Sunny is now vibe-coding with Claude to power internal tools for her support agents.

She's been building an internal answer bot that centralizes information from multiple sources and helps agents find context faster.

Agents even use it to draft responses when they’re tired or dealing with frustrated customers.

Advice for support leaders getting started with AI

Sunny’s biggest advice for teams experimenting with AI: don’t aim for perfection.

“The first few hundred iterations might not be great. It’s a never-ending process.”

Start with your documentation, launch small, and keep refining. Each improvement compounds.

Key takeaways for support teams

  • Clean your docs before adding AI. This will hugely impact how useful the tools are.
  • Automate deflection first. It delivers the fastest ROI.
  • Hire for empathy and problem-solving. AI can handle the rest.
  • Treat AI deployment like an ongoing project, not a one-time thing.

Support at Givebutter isn’t becoming less human. If anything, it’s becoming more human. AI handles the repetitive tasks so people can focus on care, nuance, and empathy.

We had a really great time jamming with Sunny! You can catch our full conversation on YouTube, alongside our previous conversations with product and engineering leaders at Intercom, Monday.com, Vercel, and more.

]]>
<![CDATA[Taskrabbit CTO Scott Porad on Building with AI]]>https://strawberryjam.ghost.io/taskrabbit-cto-scott-porad-on-building-with-ai/6904dd4782f9bd00011c2e9aMon, 03 Nov 2025 17:20:00 GMT

Taskrabbit is a marketplace delivering over 1.6M everyday home tasks every year, serving customers since 2008.

Seventeen years is a long time in software. In that time, Taskrabbit has grown from a scrappy startup to a complex system that thousands of engineers have touched.

We spoke with Scott Porad, Taskrabbit’s CTO, about what AI really means for mature engineering teams. His take: the next decade of software development won’t be defined by how much code AI can write, but by how confidently humans can ship it.

Here are the highlights from our conversation.

Engineers have become reviewers

For Scott, AI is reshaping what an engineer spends their time doing.

“The agent is going to write code for you - and you know it’s only 80% correct, but you don’t know which 20% is wrong.”

That uncertainty is the new frontier. As LLMs become teammates, engineers will need to get better at spotting what’s off rather than writing every line from scratch. In his view, code review and QA intuition will soon be the most valuable engineering skills.

“We’re going to have to start having pair reviewing, just like pair programming.”

At Taskrabbit, that means preparing for a world where reviewing becomes the primary learning path for junior developers. A new kind of apprenticeship where they level up by analyzing AI-written code alongside senior engineers.

Velocity isn’t about typing faster, it’s about reducing risk

For a company with a long-lived codebase, Scott says the bottleneck isn’t writing new code - it’s taking the time to make sure changes won’t break anything.

“If I could wave my magic wand and have perfect test automation, I’d take out a huge amount of risk of changing a large, complex system.”

That’s why he’s more excited about AI’s potential in testing than in code generation. Perfect test automation (unit, integration, end-to-end) would let engineers move fast with confidence.

“We spend so much time trying not to make mistakes because we don’t totally understand our systems.”

In other words, the real productivity unlock will come when AI makes every change reliable.

Assisted engineering, not autonomous engineering

Despite the hype around “AI agents,” Taskrabbit is taking a measured approach. Today, AI is most useful in assisted engineering - helping with documentation, small refactors, and, especially, code review.

Scott points to a recent OpenAI study comparing two LLM roles: a “coder bot” that wrote code, and a “manager bot” that reviewed it. The reviewer was consistently more effective.

“The technology is better at detecting whether a pattern someone is following is good, than originating that pattern from scratch.”

That insight has shaped Taskrabbit’s strategy. Instead of expecting AI to build entire features, they’re building workflows around AI reviewers: tools like CodeRabbit (no relation to Taskrabbit) that spot issues faster and surface context before merge.

The rise of “satellite apps”

For now, legacy codebases still limit what AI can safely touch. But Scott sees a clear on-ramp: “satellite apps.”

These are small, isolated tools that don’t depend on the core system: a new consent form, a marketing landing page, an internal admin dashboard.

“Someone was able to get something like Lovable to build a disclosure form in a completely automated fashion - and got it pretty close.”

For teams working in older stacks, satellite apps are the low-risk sandbox for experimenting with AI. They let you test workflows, measure time savings, and build confidence before expanding into production systems.

Culture before code

When it comes to adoption, Scott’s playbook is simple: don’t mandate - motivate.

“I encourage a lot. I don’t mandate.”

He compares it to the arrival of IDEs years ago: some engineers resisted, but once they saw the speed and clarity benefits, they never looked back.

Taskrabbit’s approach mirrors that history. Rather than top-down enforcement, they’ve formed small eval groups: engineers testing tools, comparing results, and evangelizing what works.

Put simply, adoption grounded in curiosity and credibility, not compliance.

Redefining apprenticeship in the AI era

Scott acknowledges a looming problem: junior engineering roles are disappearing. If AI writes most of the boilerplate, where do new engineers learn the fundamentals?

“The only way they’ll get to be good reviewers is to do it with people who are better reviewers.”

That’s why he envisions a new kind of mentorship, one centered on reviewing AI output instead of writing greenfield features. Peer reviewing will become the new pair programming, and the ability to critique machine-written code will define the next generation of software craftsmanship.

The long view: software that builds software

Zooming out, Scott sees AI as the start of another industrial shift.

“We just went through an information revolution over the last 30 years. Now we’re going through another one.”

This next revolution will reshape what it means to build. Engineers will move up the abstraction ladder: designing workflows, validating AI output, and architecting systems that learn from every deploy.

We had such a great time jamming with Scott! We’ve been having similar conversations with product and engineering leaders at top engineering orgs (like HeyGen, Wix, Vanta, etc) to unpack how they actually build with AI. You can find previous episodes on YouTube.

]]>
<![CDATA[Inside Gong's AI Stack w/ Chief R&D Officer Ohad Parush]]>https://strawberryjam.ghost.io/inside-gongs-ai-stack-w-chief-r-d-officer-ohad-parush/690233f1ecd6730001a2975dFri, 31 Oct 2025 16:34:35 GMT

Ohad Parush leads R&D at Gong - a tool that empowers revenue teams at 50% of Fortune 10 companies, as well as teams at Rippling, PitchBook, and Upwork.

Gong has been an early adopter of AI internally, and it has completely changed how their engineering team builds, reviews, and ships new products and features.

In our conversation, Ohad shared how AI has become a peer programmer, how engineers are learning faster than ever before, and why changing culture is often harder than changing technical capabilities.

Here are the highlights from our conversation.

Engineers aren’t being replaced

“It’s not going to replace engineers, but it’s definitely going to evolve engineers into a newer version of themselves.”

At Gong, AI is seen as a powerful tool that augments the engineering team's existing stack and workflows. Developers now code alongside AI tools that generate drafts, suggest refactors, and handle repetitive tasks.

What becomes crucial here is judgment: engineers who know how to direct AI in a productive direction, as well as review and challenge its output, instead of just "vibe coding."

The productivity curve isn’t flat

We asked Ohad to rate AI's impact on his engineering team's velocity on a scale of 1 to 5. His answer was nuanced:

“Some actions are a four - we’re saving huge amounts of time. Others are a two - there’s still hype, bugs, and control issues.”

AI yields massive time savings in structured workflows (like data summaries or report generation). But deep engineering still needs human rigor.

Ohad recommends that leaders start narrow - deploy AI where tasks are repetitive, measurable, and low risk.

Testing: where AI already works

AI dev tools like Claude Code and Cursor have become integral to testing at Gong. They generate unit tests, catch regressions, and increase coverage.

“The obvious answer today is testing… we are writing better code with our AI dev tools.”

It’s one of the few areas where AI consistently outperforms humans in both speed and accuracy.

AI compresses engineer onboarding time

Engineers can ramp up faster because copilots remove the friction of learning new frameworks or APIs.

“One of my engineers wanted to start using Iceberg on top of S3. In the past it would’ve taken two to four weeks. It was done in two days.”

Tasks that once took weeks are now completed in days - and engineers spend more time building, less time reading docs.

Scaling security with AI adoption

Gong works with half of all Fortune 10 companies, so safety is a huge priority. Gong runs all AI-assisted development inside secure sandboxes and layers in human code review.

“We’ve invested a lot in security… it’s not just ‘go out and start using it.’ We manage our code carefully and follow best practices from Anthropic and our internal pod security team.”

Security concerns are fundamentally baked into all product decisions at Gong.

Hiring hasn’t changed, but expectations have

Gong’s hiring playbook still values analytical ability and debugging over prompt-engineering wizardry.

“AI can make junior engineers much better, but we’re still looking for people who can analyze, debug, and design. So far that hasn’t changed.”

AI amplifies good engineers. Ohad thinks it's unrealistic to expect AI to create them from scratch - especially if their first principles are weak. Gong's training focuses on how to think with AI, not how to prompt it.

Actionable takeaways for builders

  • Use AI as a peer, not an autopilot.
  • Start with structured, repetitive tasks where accuracy is easy to measure.
  • Automate handoffs and summaries to cut internal coordination time.
  • Focus AI effort on testing and onboarding - that's where the clearest ROI is.
  • Layer security reviews and logging around every AI workflow.
  • Keep hiring for analytical thinkers, then teach them how to leverage AI.

Gong’s experience clearly shows that the next wave of engineering productivity won’t come from replacing people. AI makes great engineers faster, new engineers sharper, and teams more connected.

We had a really great time jamming with Ohad! You can catch our full conversation on YouTube, alongside our previous conversations with product and engineering leaders at Intercom, Monday.com, Vercel, and more.

]]>
<![CDATA[How HeyGen uses AI w/ Head of Product Engineering Nikunj Yadav]]>https://strawberryjam.ghost.io/how-heygen-uses-ai-w-head-of-product-engineering-nikunj-yadav/6905011c82f9bd00011c2f1dFri, 31 Oct 2025 16:33:00 GMT

HeyGen is a rocket ship in the AI video space, now with over 85,000 customers using the platform to create hyper-realistic video avatars. We spoke to their Head of Product Engineering Nikunj Yadav to unpack how his team uses AI across the dev cycle.

We get into how AI is helping backend engineers write front-end code, helping non-engineering teams self-serve technical answers, and helping everyone ship end-to-end with fewer handoffs.

Here are our takeaways:

From specialists to AI-assisted full-stack builders

For Nikunj, AI has blurred the line between backend and frontend engineers.

“Especially writing front end is much, much easier than it would have been without it.”

His team uses Cursor to generate front-end components in React and Tailwind, including engineers who don’t consider themselves UI specialists. The impact compounds: fewer handoffs, less coordination overhead, and faster end-to-end delivery.

“Backend engineers can now write the front end themselves - they know just enough to review what AI generates.”

Nikunj predicts this shift will eventually change how teams are structured. Every engineer can now take ownership across the stack.

Where AI delivers the most lift: codegen and testing

When asked which part of the dev cycle AI helps most, Nikunj said code generation is still the biggest boost.

But right behind it? Testing.

“We went from zero backend tests to reasonable coverage, and AI got us a lot of the way there.”

Developers at HeyGen now prompt AI to write unit tests - supplying the test cases, mocking instructions, and examples of past tests. It’s a practical workflow: they still design the logic and edge cases, but AI handles the repetitive setup.
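
A prompt for that workflow might look something like the sketch below - the template and names are hypothetical, not HeyGen's actual prompt:

```python
# Hypothetical template mirroring the workflow described above: the developer
# supplies the cases, mocking notes, and a past test; AI does the boilerplate.
TEST_PROMPT = """
Write pytest unit tests for `{function_signature}`.

Cases to cover:
{cases}

Mocking instructions:
{mocking_notes}

Follow the style of this existing test:
{example_test}
"""

prompt = TEST_PROMPT.format(
    function_signature="render_avatar(script: str, voice_id: str) -> Video",
    cases="- empty script raises ValueError\n- unknown voice_id falls back to default",
    mocking_notes="Mock the rendering backend; never call the real service.",
    example_test="def test_rejects_empty_script(): ...",
)
```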

The result is better coverage, faster feedback, and far less friction between coding and validation.

The dream: self-healing test suites

Still, Nikunj sees a huge opportunity ahead.

“Maybe there’s a CLI tool we could write that updates old tests automatically. That’s the dream.”

Most teams know the pain of outdated tests breaking every other CI run. A repo-aware AI that could rewrite and adapt tests after each change would be a genuine velocity unlock.

AI in review: good, but not great (yet)

HeyGen’s team also uses Graphite, both for AI-assisted review and for stacked PRs - a practice that lets engineers push multiple small changes in parallel.

The AI features have shown promise, catching minor bugs and improving with time. But Nikunj sees more potential.

“It’s still operating at a very fundamental level, like this thing looks broken. It could do more, like ‘this is a code smell, rethink this design.’”

For now, Graphite acts as a kind of intelligent CI: a first line of review before human eyes. Nikunj thinks the future lies in pattern-level feedback, where AI can spot design issues, not just syntax errors.

Hiring for problem-solvers, interviewing for AI fluency

A lot has changed at HeyGen, but one thing that hasn’t changed? What they look for in engineers.

“We’re always hiring people who are really good at problem solving.. that’s been true forever.”

What has changed is how they interview. Candidates are explicitly allowed to use AI tools like Cursor during their coding exercises.

The goal isn’t to see if they can memorize syntax - it’s to see how they think when working with AI output that’s imperfect.

“Watching how someone reviews and debugs AI-generated code gives you a whole new level of insight.”

It’s an interview design that reflects the new normal: AI will always make mistakes, and great engineers are the ones who can spot and fix them, fast.

How support and engineering collaborate

Not every increase in productivity comes from code generation. Non-engineering teams (especially customer support and success) use AI to self-serve customer questions, dramatically cutting down how often engineers get pulled into Slack threads or support requests.

“If an engineer isn’t getting bothered a few times a day, they’re just going to do a lot more.”

AI, in this sense, protects focus by reducing how often engineers have to switch contexts.

In conclusion

At HeyGen, AI has expanded what the engineering team is capable of. It has made backend engineers front-end capable, brought testing coverage up from zero, and freed developers from constant context-switching. Across the conversations we've had in this series, there's a clear pattern: companies that create a culture of frequent, lightweight experimentation are deriving the most value from AI.

We had such a great time jamming with Nikunj! You can catch our full conversation on YouTube, alongside episodes with engineering and product leaders from Intercom, Monday.com, and Vercel.

]]>
<![CDATA[How AI is changing Customer Success w/ Alex Turkovic]]>https://strawberryjam.ghost.io/how-ai-is-changing-customer-success-w-alex-turkovic/68f7db02d02d8b0001be2d48Wed, 22 Oct 2025 17:45:17 GMT

Alex Turkovic spends a lot of time thinking about where customer success is headed. He’s the Sr. Director of CX at Belfry, and the host of the Digital CX Podcast, where he talks to top customer success leaders about how tech is reshaping the craft.

We spoke to Alex about how AI is changing the way CX teams operate - from spotting churn before customers complain to hiring automation specialists over traditional CSMs.

Here are some of the highlights from our conversation:

From firefighting to forecasting

Historically, CSMs have spent most of their time reacting, whether that's churn, escalations, or some other risk indicator.

AI is starting to flip that. Tools like Vitally and Gong now make it possible to analyze ticket sentiment, NPS scores, and call transcripts to flag risks much earlier.

“You shouldn’t wait for the customer to tell you they’re unhappy. AI can show you when engagement drops or sentiment shifts long before that happens.”

Alex says top CX teams are using those signals to intervene weeks before a renewal conversation happens.

Automate prep, but keep empathy human

When issues do surface, Alex’s team uses AI to summarize every account touchpoint (usually spread across emails, calls, and tickets) so CSMs walk into conversations with all the context they need.

AI isn't replacing the art of nurturing customer relationships (yet), but it does take away the drudgery of gathering information and gives humans the space to actually listen.

“AI can do the triage and the homework.. It can’t do empathy.”

Building the digital CX function

Alex hosts many CX leaders on his pod, and he sees a new kind of role emerging inside customer success: the automation specialist.

Instead of hiring more CSMs, teams are hiring people who can build workflows using tools like n8n and Zapier, connect CRMs to support systems, and maintain data hygiene.

“Every CX org needs someone who thinks like an engineer. The modern customer experience team has a systems layer under it.”

That “digital CX” layer connects data from sales, success, and support - giving teams a single view of the customer and a foundation for predictive insights.

The new skillset for CX

Tomorrow’s CX leaders will blend empathy with automation chops. They’ll need to be as comfortable writing an n8n workflow as they are running a QBR.

For Alex, CX is not becoming less human. If anything, AI frees up CSMs to do things that are more human: especially the subtle emotional work that meaningful customer relationships are built on.

“If AI handles the noise, humans can handle what matters.”

Actionable takeaways for builders

  • Predict churn before it happens. Use sentiment and engagement signals across calls, tickets, and surveys to surface risk early.
  • Automate triage and prep. Let AI summarize history and context so CSMs focus on resolution and empathy.
  • Hire automation specialists. Build a digital CX layer that connects data, workflows, and customer systems.
  • Embrace new tooling. Platforms like Vitally, Gong, and n8n are now essential CX infrastructure.
  • Upskill for hybrid roles. Future CX pros will combine customer empathy with low-code automation skills.

We had a really great time jamming with Alex! You can catch our full conversation on YouTube, alongside our previous conversations with product and engineering leaders at Intercom, Monday.com, Vercel, and more.

]]>
<![CDATA[How Merge uses AI (w/ Co-founder & CTO Gil Feig)]]>https://strawberryjam.ghost.io/how-merge-uses-ai-w-co-founder-cto-gil-feig/68f7b2abd7f5de000127151cTue, 21 Oct 2025 20:21:53 GMT

Merge helps companies add hundreds of integrations through a single API. The product powers data connectivity for orgs like OpenAI, Perplexity, Ramp, and Brex.

We spoke to Gil Feig, Merge’s co-founder and CTO, about how his team is building with AI.

Here are some of the highlights from our conversation:

AI as a company-wide mandate

At Merge, using AI isn’t optional.

Gil calls it their “mini Shopify moment.” Every employee, from engineering to recruiting, is required to use AI tools in their day-to-day work.

“It’s an expectation that you’re going to use AI. Not just for Merge, but for your own career and your future.”

The goal is to help people build lasting skills. Merge plans to eventually include AI proficiency in performance reviews, but the team is being given time and training to get comfortable first.

Training engineers to prompt, not just code

Merge uses Windsurf for both code generation and code review. Gil jokes that sometimes Windsurf writes the code, then reviews it on GitHub.

But the bigger shift has been cultural, not technical. Many engineers lose trust after one failed prompt. Gil’s team is learning to see prompting as a skill in itself.

“If you tell it what not to do, that doesn’t help. You have to tell it exactly what to do next.”

By improving how they communicate with AI (adding structure, context, and examples) the Merge team has gone from small productivity gains to dramatic speedups in how quickly they can ship.

Code gen thrives in greenfield work

AI’s biggest impact so far has been on new products, where there’s little tech debt.

Gil estimates their newest product, Agent Handler, was 90% AI-built. The combination of Windsurf and strong prompting allows the team to ship new features in days instead of weeks.

By contrast, applying AI to older codebases is still challenging (something Vanta's Iccha Sethi told us when we spoke to her). Large, complex repos often exceed context windows, and subtle changes can't yet be fully automated.

For now, Gil’s rule of thumb: AI accelerates creation more than maintenance.

Beyond engineering: recruiting as an unexpected AI use case

Outside engineering, one department at Merge has fully embraced AI: recruiting.

Recruiting shares a lot of similarities with sales (outreach, qualification, and pipeline management for instance) and AI excels in those areas. It helps Merge’s team identify candidates, personalize outreach, and fill roles faster.

“It’s great for top-of-funnel sourcing. Recruiting is like sales - but for non-executive hires, AI can run most of the playbook.”

Interestingly, sales hasn’t seen the same lift yet. For enterprise deals, nuanced relationships still matter more than volume. AI is great for scale, but it can't be used for persuasion, especially the kind needed for high-touch, high-value deals.

Hiring for curiosity, not credentials

This is probably the most common thing we've heard from the leaders we've talked to: AI hasn't changed the skills Merge hires for. The meta-skills required to be a great hire (curiosity, agency, etc.) have become even more important post-AI.

The best hires aren’t the ones who already know how to use AI; they’re the ones eager to learn it.

“We’ve had people become AI experts here just by experimenting, reading, and talking to AI itself.”

Gil prioritizes drive, curiosity, and adaptability over hard credentials. Tools will change; the meta-skills won't.

Low-stakes experimentation builds momentum

Some of the most surprising wins come from employees using AI to solve small, everyday problems.

When one teammate built a web app with Lovable to generate company email signatures (something that used to take hours to standardize), Gil realized how powerful these low-stakes experiments can be.

“You’d never waste time building an app for that.. now someone can in ten minutes - and it works.”

Those small experiments build confidence. They also create a bottom-up culture of continuous tinkering.

Actionable takeaways for builders

  • Make AI usage a cultural default. Adoption shouldn’t be a side quest. It needs to be how work gets done.
  • Treat prompting like a craft. Give teams concrete frameworks and examples to improve prompt quality over time.
  • Use AI for creation, not maintenance. AI performs best on new surfaces where there’s less legacy complexity.
  • Expand beyond engineering. Recruiting and operations often see ROI faster than product teams.
  • Hire learners, not experts. Curiosity and adaptability compound faster than credentials.
  • Celebrate micro-experimentation. Encourage small, creative AI wins that build the tinkering muscle.

We often talk about AI adoption as a technical shift. Gil reminded us it’s also a cultural one. By encouraging use, normalizing learning, and celebrating small wins, Merge is teaching its people to think alongside machines.

We had a great time jamming with Gil! We’ve been having similar conversations with product and engineering leaders at top engineering orgs (like Vercel, Wix, Vanta, etc) to unpack how they actually build with AI. You can find previous episodes on YouTube.

]]>
<![CDATA[How is AI Changing Customer Support in 2025?]]>https://strawberryjam.ghost.io/the-complete-guide-to-ai-customer-support-tools-tested-in-2025/68f252ef8226f200010efb98Tue, 21 Oct 2025 20:10:14 GMT

Customer support teams answered 12 billion tickets in 2024. Roughly 40% of them were questions AI could have handled automatically. The technology has shifted from basic keyword-matching chatbots to systems that actually understand context, detect frustration in real-time, and route complex issues to the right team without human intervention.

At Jam we spend a lot of time talking to support, engineering, and customer success leaders at companies across all stages.

This guide breaks down everything we've learned through those conversations: how AI is actually changing the customer support function, how leading platforms compare across categories like conversational agents and sentiment analysis, and what you'll encounter when implementing them in a real support operation.

The state of customer support today (2025)

Old chatbots from five years ago worked on keyword matching. You typed "refund" and got a canned response about the refund policy, even if you were actually asking whether refunds include shipping costs. Modern tools understand context, follow the thread of a conversation across multiple messages, and can even detect when they're out of their depth and hand things off to a human agent.

But chatbots are just one piece. AI customer support now includes tools that help human agents work faster by suggesting responses as they type, systems that analyze customer emotions from voice tone or text patterns, and platforms that automatically route tickets to the right person based on past performance data.

Core AI capabilities in modern support workflows

The real work happens through five distinct technologies, each solving a different problem in the support workflow.

Conversational agents with context and memory

Modern chatbots hold actual conversations rather than following rigid scripts. A customer might ask "When will my order arrive?" then immediately follow up with "Can I change the shipping address?" The AI remembers that both questions relate to the same order without making the customer repeat their order number.

This works because the underlying AI models - transformers, the same architecture behind GPT - process language in a way that captures meaning rather than just matching keywords. Pair this with persistent memory and the relevant context, and suddenly you have a system capable of interpreting the nuances of human conversation. The same capability lets the system recognize that "Where's my package?" and "Track my shipment" are different ways of asking the same question.
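
Here's a minimal sketch of what persistent conversation state can look like - the class and field names are illustrative, not any particular vendor's API:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ConversationState:
    """Keeps entities from earlier turns so follow-ups can reuse them."""
    customer_id: str
    entities: dict = field(default_factory=dict)

    def remember(self, key: str, value: str) -> None:
        self.entities[key] = value

    def resolve(self, key: str) -> Optional[str]:
        # "Can I change the shipping address?" omits the order number;
        # fall back to what the conversation already established.
        return self.entities.get(key)

state = ConversationState(customer_id="cust_42")
state.remember("order_id", "A-1042")   # set by "When will my order arrive?"
print(state.resolve("order_id"))       # reused by the follow-up question
```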

Knowledge retrieval to assist human agents

While a support agent types a response, AI scans through past ticket resolutions and internal documentation to surface relevant information. This happens in real-time, appearing as suggestions in the agent's interface without interrupting their flow.

The system learns from which suggestions agents actually use. If an agent consistently ignores certain auto-generated responses but uses others, the AI adjusts its recommendations to match that agent's style and the types of issues they handle most effectively.
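
One simple way to implement that feedback loop is to track per-agent acceptance rates and re-rank suggestions accordingly - a sketch under that assumption:

```python
from collections import defaultdict

class SuggestionRanker:
    """Re-ranks canned responses by how often each agent actually uses them."""
    def __init__(self):
        self.shown = defaultdict(int)     # (agent, suggestion) -> times shown
        self.accepted = defaultdict(int)  # (agent, suggestion) -> times used

    def record(self, agent: str, suggestion: str, used: bool) -> None:
        self.shown[(agent, suggestion)] += 1
        if used:
            self.accepted[(agent, suggestion)] += 1

    def score(self, agent: str, suggestion: str) -> float:
        shown = self.shown[(agent, suggestion)]
        # Laplace smoothing so brand-new suggestions still surface sometimes.
        return (self.accepted[(agent, suggestion)] + 1) / (shown + 2)

ranker = SuggestionRanker()
ranker.record("agent_7", "password_reset_macro", used=True)
print(round(ranker.score("agent_7", "password_reset_macro"), 2))  # 0.67
```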

Sentiment and speech analysis

Voice analysis examines acoustic properties like pitch changes, speaking pace, and volume to detect emotions. A customer whose voice gets louder and faster is probably getting frustrated. Text analysis looks for linguistic patterns - short sentences, specific phrases like "This is ridiculous," or the absence of pleasantries that usually signal politeness.

When sentiment scores cross certain thresholds, the system can automatically escalate to a senior agent or flag the conversation for a supervisor to monitor. This catches problems before they turn into angry social media posts or worse: churn.
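
A toy version of that threshold logic might look like this. The markers, weights, and escalation threshold are all assumptions; production systems use trained sentiment models rather than keyword scoring:

```python
NEGATIVE_MARKERS = ["this is ridiculous", "unacceptable", "cancel my account"]
ESCALATION_THRESHOLD = -0.4  # assumed tunable per team

def sentiment_score(message: str) -> float:
    """Toy text sentiment from -1 (angry) to 1 (happy)."""
    text = message.lower()
    score = -0.5 * sum(phrase in text for phrase in NEGATIVE_MARKERS)
    if "thanks" in text or "please" in text:
        score += 0.3  # pleasantries usually signal politeness
    if len(text.split()) < 4:
        score -= 0.2  # short, clipped messages tend to indicate frustration
    return max(-1.0, min(1.0, score))

def handle(message: str) -> str:
    if sentiment_score(message) <= ESCALATION_THRESHOLD:
        return "escalate: route to a senior agent and flag for a supervisor"
    return "continue: normal queue"

print(handle("This is ridiculous, cancel my account"))       # escalates
print(handle("Thanks, could you please check my invoice?"))  # normal queue
```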

Predictive ticket routing

Traditional ticket assignment used simple rules: billing questions go to the billing team, technical questions go to tech support. AI routing considers dozens of variables at once - which agent has the best resolution rate for API questions, who's currently handling the lightest workload, even what time of day specific agents perform best.

The system picks up on patterns humans wouldn't notice. Maybe one agent resolves database issues 30% faster than average, but only when handling them in the afternoon. The AI learns these nuances and routes accordingly.
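
Here's a simplified scoring sketch. The agent stats, weights, and time-of-day bonus are invented for illustration; a real router learns them from historical resolution data:

```python
from datetime import datetime

# Per-agent stats a routing model might learn from past tickets (made up here).
agents = [
    {"name": "Ana", "skill": {"api": 0.9, "billing": 0.3}, "open_tickets": 5, "peak_hours": range(13, 18)},
    {"name": "Ben", "skill": {"api": 0.6, "billing": 0.8}, "open_tickets": 2, "peak_hours": range(9, 13)},
]

def route(topic: str, now: datetime) -> str:
    def score(agent: dict) -> float:
        s = agent["skill"].get(topic, 0.1)   # resolution quality for this topic
        s -= 0.05 * agent["open_tickets"]    # penalize heavy current workload
        if now.hour in agent["peak_hours"]:  # time-of-day performance pattern
            s += 0.2
        return s
    return max(agents, key=score)["name"]

print(route("api", datetime(2025, 10, 21, 15)))  # afternoon API ticket -> Ana
```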

Automated bug capture and triage

When customers report software problems, specialized tools automatically collect the technical details developers actually care about. This includes browser console logs, network requests, device specifications, and reproduction steps - everything bundled into a single report that goes straight to the engineering team.

A customer success manager doesn't have to understand what a "CORS error" means. The system captures it automatically and sends it to someone who does.
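
The payload such a tool assembles might look roughly like this. The field names and report shape are assumptions for illustration, not Jam's actual schema:

```python
import json
import platform
from datetime import datetime, timezone

def build_bug_report(description: str, console_logs: list, network_requests: list) -> dict:
    """Bundle the diagnostics engineers usually ask for into one payload."""
    return {
        "description": description,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "console_logs": console_logs,          # e.g. the CORS error the CSM never has to parse
        "network_requests": network_requests,  # failing calls with status codes
        "device": {"os": platform.system(), "runtime": platform.python_version()},
        "reproduction_steps": ["auto-generated from the recorded session"],
    }

report = build_bug_report(
    "Checkout button does nothing",
    console_logs=["Access to fetch at api.example.com blocked by CORS policy"],
    network_requests=[{"url": "/api/checkout", "status": 0}],
)
print(json.dumps(report, indent=2))
```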

Quick comparison of leading platforms

Platform | Primary strength | Best for | Key AI features
Intercom | Conversational chatbots | Mid-market SaaS | Intent detection, automated routing
Zendesk AI | Agent assist & automation | Enterprise teams | Answer suggestions, macro recommendations
Forethought | Predictive support | High-volume operations | Ticket deflection, sentiment routing
Ada | No-code bot building | Non-technical teams | Visual flow builder, multi-language
Level AI | Voice analytics | Call centers | Speech emotion detection
Jam | Bug reproduction | Product & engineering | Automatic technical context capture

Top customer support AI tools by category

The landscape breaks into specialized tools rather than all-in-one platforms that try to do everything.

Chatbots

Intercom's Fin pulls information from help centers, past conversations, and product documentation to answer customer questions. The interesting part? It admits when it doesn't know something rather than making up an answer - critical for maintaining trust.

Zendesk's Answer Bot integrates directly with their ticketing system. It responds to common questions automatically and only creates a ticket when it can't resolve the issue. When it hands off to a human, the full conversation history comes with it.

Agent copilots

Salesforce Einstein surfaces relevant case history while agents work, using natural language understanding to match the current issue with past resolutions. It also generates response drafts that agents can edit before sending, cutting down reply times without sacrificing the personal touch.

Forethought's Assist predicts what a customer wants before they finish typing. An agent writing "I understand you're having trouble with..." might see suggestions like "...accessing your account. Let me help you reset your password" based on the ticket context.

Sentiment analytics tools

Level AI analyzes customer calls in real-time, flagging conversations where sentiment turns negative and alerting supervisors to jump in. It also identifies coaching opportunities by comparing how different agents handle similar calls and highlighting what top performers do differently.

Workflow automation and routing

Capacity's AI routes tickets based on content meaning rather than keyword matching. A message saying "I was charged twice" gets routed to billing even if the customer never used the word "billing," because the system understands the semantic connection.

The automation extends beyond routing. High-value customers get priority queuing, tickets mentioning legal terms get flagged for review, and messages containing specific keywords automatically attach relevant documentation.
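
A stripped-down version of that rule layer could look like the following. The field names are assumptions, and the "charged twice" check is a substring stand-in for the semantic matching a real system performs:

```python
# Each rule pairs a predicate with the action it triggers.
RULES = [
    (lambda t: t["customer_tier"] == "enterprise", "priority_queue"),
    (lambda t: any(w in t["body"].lower() for w in ("lawsuit", "attorney", "legal")), "legal_review"),
    (lambda t: "charged twice" in t["body"].lower(), "billing"),  # semantic match in real systems
]

def apply_rules(ticket: dict) -> list:
    actions = [action for predicate, action in RULES if predicate(ticket)]
    return actions or ["default_queue"]

print(apply_rules({"customer_tier": "enterprise", "body": "I was charged twice"}))
# ['priority_queue', 'billing']
```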

Bug reproduction and engineering collaboration

When support teams encounter software bugs, AI customer support tools like Jam capture everything developers would otherwise ask for in frustrating back-and-forth email chains: browser console logs, network activity, device specifications, step-by-step reproduction paths. This eliminates the cycle of "What browser were you using?" and "Can you reproduce this?"

The technical context gets captured with one click, then routed to the appropriate engineering team with all the diagnostic information already attached.

Implementation checklist for a 30-day rollout

1. Map your support workflow

Start by analyzing your last three months of ticket data. Which questions show up most often? Where do agents spend the most time? Which issues get escalated repeatedly?

This audit reveals your best automation opportunities. If 30% of tickets are password resets, that's a prime candidate for AI handling.
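
If your help desk exports tickets to CSV, a few lines of pandas can surface those candidates. The column names here are assumptions you'd swap for your own export's fields:

```python
import pandas as pd

# Toy stand-in for three months of exported ticket data.
tickets = pd.DataFrame({
    "category":    ["password_reset", "billing", "password_reset", "bug", "password_reset"],
    "handle_mins": [4, 22, 5, 75, 3],
    "escalated":   [False, True, False, True, False],
})

summary = tickets.groupby("category").agg(
    volume=("category", "size"),
    avg_handle_mins=("handle_mins", "mean"),
    escalation_rate=("escalated", "mean"),
).sort_values("volume", ascending=False)

# High-volume, low-handle-time categories (password resets here) are the
# prime candidates for AI deflection.
print(summary)
```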

2. Audit your existing tech stack

Document every tool your support team currently uses: help desk software, CRM, communication platforms, knowledge bases. Check whether your shortlisted AI tools offer native integrations or if you'll build custom connections.

Pay attention to data flow between systems. If your help desk doesn't sync customer data with your CRM in real-time, that gap might cause AI tools to work with incomplete information.

3. Define success metrics

Establish baseline measurements before implementing anything. Track your current average first response time, resolution time, customer satisfaction score, and ticket volume per agent.

Set realistic improvement targets. A 20% reduction in response time is achievable in the first month, while more ambitious goals like 50% ticket deflection typically take longer as the AI learns from more interactions.

4. Run a controlled pilot

Deploy your chosen AI tool to a subset of your support operation - maybe one product line, one communication channel, or one team. This controlled environment lets you identify issues before they affect your entire customer base.

Monitor the pilot closely for the first two weeks. Watch for AI responses that miss the mark, integration hiccups, or workflow disruptions that training can address.

5. Train agents and monitor quality

Hold hands-on training sessions where agents interact with the AI tools using real scenarios from your ticket history. Focus on working alongside AI - when to let it handle responses, how to edit AI-generated drafts, when to override its suggestions.

Implement quality checks where supervisors review a sample of AI-assisted interactions weekly. This catches drift where the AI might develop unhelpful patterns.

6. Iterate and scale

Based on pilot results, refine your AI configuration before expanding. If the chatbot struggles with specific question types, add those to your training data. If agents ignore certain AI suggestions, investigate why and adjust.

Gradually expand to additional channels, teams, or use cases every two weeks. Counting the initial 30-day rollout, full deployment typically takes 60-90 days when done thoughtfully.

Multimodal LLMs for voice and video tickets

The newest AI models process not just text but voice recordings, screenshots, and video demonstrations within the same conversation. A customer can record a 30-second video showing a confusing interface element, and the AI analyzes both what they're saying and what's visible on screen.

This eliminates the translation gap where customers struggle to describe visual problems in words. The AI sees exactly what they see and can respond with annotated screenshots or specific click-by-click instructions.
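
As one concrete example, here's what a multimodal request can look like with OpenAI's Python SDK. The model name and screenshot URL are placeholders, and video typically gets sampled into frames plus an audio transcript before it reaches the model:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any vision-capable model works
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "The customer says this settings page is confusing. What are they stuck on?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/ticket-4812/screenshot.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```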

When customers record a video with Jam, Jam AI titles the ticket and writes the reproduction steps based on the logs, the user's voice, and whatever happened on screen.

Proactive support with predictive analytics

AI systems now monitor product usage patterns to identify customers likely to encounter problems before they reach out. If analytics show a user repeatedly attempting a failed action, the system triggers an automated check-in offering help.

This shifts support from reactive to preventive. Instead of waiting for frustrated customers to submit tickets, you're solving problems at the moment of confusion.
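
A bare-bones version of that trigger is just a counter with a threshold. The threshold value and the `send_check_in` hook are assumptions about how you'd wire this into your own product analytics:

```python
from collections import defaultdict

FAILURE_THRESHOLD = 3  # assumed: repeated failures in a session trigger outreach
failed_attempts = defaultdict(int)

def send_check_in(user_id: str, action: str):
    # Hypothetical messaging hook - could be an in-app message or an email.
    print(f"Hi {user_id}, having trouble with '{action}'? Here's a guide that may help.")

def record_event(user_id: str, action: str, succeeded: bool):
    """Called from product analytics; fires a check-in before a ticket exists."""
    if succeeded:
        failed_attempts[(user_id, action)] = 0
        return
    failed_attempts[(user_id, action)] += 1
    if failed_attempts[(user_id, action)] >= FAILURE_THRESHOLD:
        send_check_in(user_id, action)

for _ in range(3):
    record_event("user_42", "export_csv", succeeded=False)  # third failure triggers outreach
```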

Continuous learning from engineering feedback

AI support tools increasingly connect with issue tracking systems to learn from bug resolution patterns. When engineering closes a ticket with specific details about the root cause, that information feeds back into the AI's knowledge base.

This creates a feedback loop where the support AI gets smarter every time developers fix something, gradually building expertise about common product quirks and workarounds.
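
Mechanically, this can be as simple as a webhook on the issue tracker's "closed" event. The handler and field names below are hypothetical:

```python
knowledge_base = []  # entries the support AI retrieves from

def on_issue_closed(issue: dict):
    """Hypothetical webhook handler for an issue tracker's 'closed' event."""
    if not issue.get("root_cause"):
        return  # nothing for the support AI to learn from
    knowledge_base.append({
        "symptom": issue["title"],
        "root_cause": issue["root_cause"],
        "workaround": issue.get("workaround", "fixed in the latest release"),
    })

on_issue_closed({
    "title": "CSV export times out for large workspaces",
    "root_cause": "query missing an index on workspace_id",
    "workaround": "export in date-bounded batches until the fix ships",
})
print(knowledge_base[0]["workaround"])
```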

Challenges and risks to plan for

Data privacy and compliance

AI tools often require access to customer conversations, personal information, and usage data to function effectively. This creates compliance considerations for GDPR, HIPAA, and other regulations depending on your industry and customer locations.

Evaluate each vendor's data handling policies carefully:

  • Data storage location: Where customer information lives and whether it crosses international borders
  • Retention periods: How long the platform keeps conversation logs and personal details
  • Deletion rights: Whether you can remove specific customer records on request

Some platforms offer regional data residency options where information never leaves specific geographic boundaries.

Hallucination and response quality

Large language models sometimes generate plausible-sounding but factually incorrect responses - a phenomenon called hallucination. In customer support, this might mean confidently stating incorrect refund policies or providing wrong technical specifications.

Mitigation strategies include grounding AI responses in verified knowledge bases rather than allowing open-ended generation, implementing confidence thresholds so uncertain responses get routed to humans, and regularly auditing AI interactions.
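
Here's a sketch of the grounding-plus-threshold pattern. The word-overlap retrieval is a crude stand-in for embedding search, and the 0.75 threshold is an assumption you'd tune against your own audit data:

```python
CONFIDENCE_THRESHOLD = 0.75  # assumed: below this, a human reviews the reply

def grounded_answer(question: str, knowledge_base: list) -> dict:
    """Only answer from verified articles; otherwise hand off to a human."""
    words = {w.strip("?.,!").lower() for w in question.split() if len(w) > 4}
    best, overlap = None, 0
    for article in knowledge_base:
        hits = sum(1 for w in words if w in article["text"].lower())
        if hits > overlap:
            best, overlap = article, hits
    confidence = overlap / max(len(words), 1)
    if best is None or confidence < CONFIDENCE_THRESHOLD:
        return {"action": "route_to_human", "reason": "answer not grounded in a verified source"}
    # Keeping the source id lets supervisors audit the reply later.
    return {"action": "reply", "text": best["text"], "source": best["id"]}

kb = [{"id": "kb-12", "text": "Refunds include original shipping costs within 30 days."}]
print(grounded_answer("Do refunds include shipping?", kb))    # grounded reply
print(grounded_answer("Can I pay with cryptocurrency?", kb))  # routes to a human
```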

Change management for agents

Support teams often view AI implementation with anxiety about job security. Transparent communication about how AI augments rather than replaces human agents helps, as does involving agents in the pilot process.

The reality is that AI handles repetitive, straightforward issues while escalating complex or emotionally charged situations to humans. This typically makes agent work more interesting by eliminating the tedious parts.

Plus, we believe that with AI, support agents will become product builders, with agency to ship fixes, update help docs, and more — directly impacting the customer experience.

Where AI meets engineering handoffs

The most frustrating support scenarios often involve software bugs that require engineering attention. Traditional workflows create friction: support agents file bug reports, developers ask clarifying questions, agents go back to customers for more details, and the cycle repeats for days.

AI-powered bug capture tools eliminate this friction by automatically collecting technical diagnostics at the moment a customer reports an issue. Console errors, network requests, browser specifications, and reproduction steps get bundled together instantly.

This acceleration matters because every day a bug goes unfixed represents continued customer frustration and additional support volume. When the handoff from support to engineering happens cleanly with complete information, resolution times drop dramatically.

Ship faster, support smarter with Jam

Bug reporting doesn't have to be a bottleneck between your support and engineering teams. Jam captures everything developers ask for - console logs, network activity, device specs, and reproduction steps - with a single click.

When your support team encounters a software issue, Jam automatically bundles all the technical context into a comprehensive report that integrates directly with tools like Jira, GitHub, and Slack.

FAQs about AI customer support tools

How long does it take to see measurable improvements after implementing AI customer support tools?

Most teams notice initial improvements in response times within the first few weeks as chatbots begin handling straightforward queries. More significant gains in resolution rates and customer satisfaction typically emerge after 2-3 months once agents have adapted to AI-assisted workflows and the system has learned from enough interactions.

What happens to sensitive customer data when using AI-powered support tools?

Reputable AI customer support platforms use encryption, data anonymization, and compliance frameworks like SOC 2 and GDPR to protect customer information. However, data handling policies vary significantly between vendors, so organizations should verify specific security measures and data residency options before implementation, especially in regulated industries.

Can AI customer support tools integrate with existing help desk systems and migrate historical ticket data?

Most modern AI support platforms offer pre-built integrations with popular help desk systems like Zendesk, Intercom, and Freshdesk, allowing them to access historical ticket data for training and context. The completeness of data migration varies depending on your current setup and the specific tools involved, though APIs typically enable bidirectional data flow.

]]>
<![CDATA[How Wix Builds with AI, with Chief Architect Yoav Abrahami]]>https://strawberryjam.ghost.io/how-wix-builds-with-ai/68f10624020a4c0001000b62Tue, 21 Oct 2025 19:46:52 GMT

Wix powers millions of websites worldwide. We spoke to Yoav Abrahami, their Chief Architect, to learn how his team is integrating AI into the company’s massive engineering org - from internal dev tools to the next generation of its website builder.

Here are some of the highlights from our conversation:

AI as a productivity multiplier, not a replacement

Yoav compares today’s AI tools to a junior developer: they can help the org move faster, but only if guided well.

“AI today is like having another junior developer working for you. You have to explain what you want - and it’ll make the same mistakes a junior developer would.”

Instead of chasing fully automated code generation, Wix treats AI as a force multiplier: useful for repetitive edits, refactors, and documentation - not for architectural decisions.

Yoav recommends that builders start by defining guardrails and context: expose your internal APIs, schemas, and design systems so AI copilots can reason within your domain.

Building for developer velocity

Wix has over 100 engineers dedicated solely to improving developer velocity: optimizing build times, testing, and code generation. AI is now another layer in that stack.

“Developers are your most expensive and abundant resource. Any minute saved for them compounds across the whole org.”

Yoav thinks engineering leaders should measure where time is lost (builds, tests, onboarding) and deploy AI only where it directly saves developer time.

Experiment relentlessly, measure sentiment

Yoav’s team runs countless AI POCs across testing, monitoring, and code review - keeping what works and scrapping what doesn’t.

“Some ideas work, some don’t. The best metric is developer feedback.”

They combine hard metrics (time to fix bugs, ramp-up time) with soft ones (how engineers feel about using the tools). 

Track both quantitative and qualitative signals. Developer enthusiasm is often the earliest indicator of real productivity gains.

The real challenge: UX, not models

AI integration isn’t just technical, it’s experiential.

“The funny thing is, the hardest part isn’t the AI. It’s the UX. We’re basically back to 1980s interfaces - writing text, getting text back.”

Wix’s AI site-builder, Astro, took a year to design because the team spent so long thinking deeply about the user experience.

This highlights a common theme across the conversations we’ve been having with engineering leaders for AI Speedrun. The companies that integrate AI best focus on how users interact with it, not just which model is plugged in.

Expect the hype cycle, and prepare for what follows

Yoav is clear-eyed about where we are in the curve:

“We’re in the middle of the code-generation hype. There will be disappointment, then we’ll find the real use cases.”

The companies that derive meaningful value will be the ones that treat AI like any other technology wave: experiment early, measure impact, and iterate through the dip.

The next economy: when AI becomes a distribution channel

Perhaps Yoav’s most prescient insight was about what comes after websites: the next distribution channel won’t be search or social, it’ll be AI chat interfaces.

“Search is a $100B economy. Social is another. But there’s no economy yet for AI - no way to spend $50M on AI that drives your business.”

Yoav’s observation aligns with the growth of AEO, or Answer Engine Optimization - where brands compete to be discovered on ChatGPT and Claude. 

As the web shifts from being human-readable to AI-readable, visibility won’t depend on backlinks or keywords but on how well your product, APIs, and documentation are structured for models to understand and retrieve. 

Final thoughts

Yoav’s philosophy is simple: velocity compounds.

“AI won’t solve all your problems - but if you’re not using it, you’re making a big mistake.”

For engineering leaders, that means keeping AI grounded in measurable gains: faster feedback loops, cleaner handoffs, and a better development flow. 

We had a really great time jamming with Yoav! You can catch our full conversation on YouTube, alongside our previous conversations with product and engineering leaders at Intercom, Monday.com, Vercel, and more.

]]>