AI-powered geographic expansion
Overview
A US-based gardening & farming app with solid crop planning, layout, and management capabilities makes recommendations based on decades of US growing zone data. While reliable, that data limited the app's reach to the US.
We used AI integrations to expand the app's capabilities internationally and to improve functionality for all users.
It was a success.
The problem
The app accepted any location, but its recommendations assumed a US growing zone. A gardener in the Netherlands or New Zealand had to research which US growing zone most closely matched their climate, and could still end up with the wrong recommendation.
The client's dataset only covered US zones, with no viable international equivalent. The timeline and budget didn't allow for integrating or building another dataset.
Would we be able to get reliable and consistent recommendations from an AI integration instead?
The approach
I designed three features, each targeting a different moment where growing zone data comes into play. I then researched and tested prompting strategies, designed the business logic, and specified the JSON response formats. The LLM would do the heavy lifting on recommendations.
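To give a sense of what "specifying the JSON response format" plus business logic looked like, here is a minimal sketch of validating a zone-mapping reply and falling back when the model can't produce a close match. The field names, zone list, and fallback shape are illustrative assumptions, not the app's actual contract.

```python
import json

# Assumed response contract (illustrative): {"zone": "8a", "close_match": true}
# USDA hardiness zones run 1a through 13b.
VALID_ZONES = {f"{n}{half}" for n in range(1, 14) for half in ("a", "b")}

def parse_zone_response(raw: str) -> dict:
    """Validate the model's JSON reply against the expected schema.

    When the reply is malformed or reports no close US-zone match,
    the business logic defers to AI-powered suggestions instead.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return {"zone": None, "use_ai_suggestions": True}

    zone = data.get("zone")
    if zone in VALID_ZONES and data.get("close_match"):
        return {"zone": zone, "use_ai_suggestions": False}

    # No reliable match: seeding dates and companion planting
    # also fall back to the LLM for this garden.
    return {"zone": None, "use_ai_suggestions": True}
```

Validating the reply server-side, rather than trusting the model's output, keeps an occasional malformed answer from breaking the setup flow.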
Location-to-zone mapping. During garden setup, gardeners can now select any location in the world. Behind the scenes, the AI maps it to the nearest US growing zone. If no close match exists, the app defaults to AI-powered suggestions for seeding and companion planting as well (see below).
Seeding date recommendations. In the planting calendar, gardeners can request an AI-generated optimal seeding date when adding a crop. Besides covering the "no close match" case above, this also addresses a more general nuance: the "optimal" date may have passed for the current season while "good" dates are still available.
Companion planting suggestions. In the garden layout view, gardeners can click any planting and ask what grows well alongside it, and what doesn't. They get suggestions and explanations, calibrated to their growing zone or location.
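The "optimal date has passed" nuance above can be sketched as a small decision rule. This is a hypothetical helper, assuming the LLM returns an optimal date and the end of the "good" seeding window; the names and shape are mine, not the app's.

```python
from datetime import date

def pick_seeding_date(today: date, optimal: date, good_window_end: date):
    """Decide what seeding date to recommend.

    - Before the optimal date: recommend the optimal date.
    - Optimal date passed, but the "good" window is still open:
      recommend seeding now rather than waiting a full season.
    - Window closed: return None so the caller can suggest next season.
    """
    if today <= optimal:
        return optimal
    if today <= good_window_end:
        return today  # still a good, if not optimal, time to seed
    return None  # out of season for this crop
```

The point of the rule is the middle branch: a strictly "optimal-only" answer would tell a gardener in late April to wait a year, when seeding today would still work fine.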
The engineering was straightforward once the business logic was right. The significant work was designing a prompting strategy that produced good, consistent results without letting token spend become a problem.
The projected running costs were reasonable.
The outcome
30 days from idea to beta rollout, with an additional 30 days for calibration and polishing.
Roughly 90% of active users tried an AI feature at least once. Community feedback was mostly positive, with over 60% of users going on to use the features regularly.
With friction reduced for global markets, the user base grew by double digits within 90 days, without a meaningful increase in marketing spend.
What I'd do differently
I would put more time into more granular analytics: Did users edit the recommended dates? Did they regenerate them? Were adopters generally AI-bullish, or did one specific feature drive adoption?
I'd also invest in a more complete caching and feedback loop. We had to mitigate LLMs giving different answers to the same question. A better, if more complex, approach would be to track usage signals: how often a recommendation was accepted, modified, or ignored. Over time, this would build a dataset of validated responses and reduce token costs.
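The cache-plus-feedback idea could look something like this: identical questions reuse a stored answer, and acceptance versus modification signals gradually mark answers as validated. A minimal sketch with illustrative names and an arbitrary validation threshold; the real design would need eviction, seasonality, and versioning.

```python
class ResponseCache:
    """Cache LLM answers keyed by (location, question) and track
    how users respond to them. Once an answer has enough positive
    signal, it can be served without another LLM call."""

    def __init__(self):
        # key -> {"answer": str, "accepted": int, "modified": int}
        self._store = {}

    def put(self, location: str, question: str, answer: str) -> None:
        self._store[(location, question)] = {
            "answer": answer, "accepted": 0, "modified": 0,
        }

    def get(self, location: str, question: str):
        entry = self._store.get((location, question))
        return entry["answer"] if entry else None

    def record_feedback(self, location: str, question: str, accepted: bool) -> None:
        entry = self._store[(location, question)]
        entry["accepted" if accepted else "modified"] += 1

    def is_validated(self, location: str, question: str, threshold: int = 5) -> bool:
        """An answer counts as validated after enough feedback,
        mostly positive. The threshold of 5 is arbitrary here."""
        entry = self._store.get((location, question))
        if not entry:
            return False
        total = entry["accepted"] + entry["modified"]
        return total >= threshold and entry["accepted"] > entry["modified"]
```

Every validated answer is a future question that never reaches the LLM, which is where the token savings come from.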