Delivery Zone Planning
How my research helped identify the transparency problem of our AI product

Logistics
Decision support system
User Research
Context
GHN operates multiple hubs and spokes across Vietnam, functioning as order distribution centers. Orders travel from senders through these hubs to a final spoke, where they’re ready for last-mile delivery.
Each final spoke manages multiple trucks; each truck covers a designated delivery zone comprising several nearby areas (e.g., districts or wards).
The process of selecting and grouping areas into zones is called “delivery zone planning”.
Since the order volume in each area constantly fluctuates, the optimal area grouping must adapt dynamically. The goal is to maximize the number of orders delivered by each truck while keeping its capacity fully utilized, avoiding both overloading and underutilization.
Five dedicated people, the five Regional Managers, are responsible for this task. Periodically, they review and re-plan the zones.
Example: Spoke A manages three delivery zones: Red, Blue, and Green, each serviced by a dedicated truck.
• The Red Zone consists of District 1, District 4, An Khanh Ward, and Thu Thiem Ward.
• The Blue Zone comprises only three wards due to the high order volume in this area.
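The capacity constraint behind this grouping can be sketched as a naive greedy heuristic. All order volumes and the capacity figure below are hypothetical, and the real planning process also weighs geography, road access, and the managers' own ad-hoc criteria, none of which this sketch captures.

```python
TRUCK_CAPACITY = 300  # assumed max orders per truck per day (hypothetical)

# Hypothetical daily order volumes per area (district or ward)
areas = {
    "District 1": 120,
    "District 4": 90,
    "An Khanh Ward": 40,
    "Thu Thiem Ward": 30,
    "Ward A": 110,
    "Ward B": 100,
    "Ward C": 80,
}

def plan_zones(areas, capacity):
    """Greedily group areas into zones without exceeding truck capacity."""
    zones, current, load = [], [], 0
    # Visit the largest areas first so high-volume districts anchor zones
    for name, volume in sorted(areas.items(), key=lambda kv: -kv[1]):
        if load + volume > capacity and current:
            zones.append(current)   # close the current zone and start a new one
            current, load = [], 0
        current.append(name)
        load += volume
    if current:
        zones.append(current)
    return zones

for i, zone in enumerate(plan_zones(areas, TRUCK_CAPACITY), 1):
    print(f"Zone {i}: {zone}")
```

Even this toy version shows why the task needs periodic re-planning: change any area's volume and the grouping that keeps every truck under capacity changes with it.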
Challenge
We were asked to create a decision support system that could assist the Regional Managers with delivery zone planning.
Research
Similar to the Demand Forecast project, I helped research and analyze how the five Regional Managers were performing the zone planning task.
The insights served as input for the data team to devise the algorithm and rules for automating the task.
Our first MVP was a suggestion system, designed first for Ho Chi Minh City. Users could input their requirements (e.g., select the historical data to be used for calculations), click a button, and then view the zone suggestions generated by the system.
However, we knew this very first version was not reliable enough for real usage: we had not yet found an optimal decision-making process or algorithm, and we also lacked critical data (e.g., truck capacity fill rate).
While the data and development teams continued their work, I decided to test this MVP out to see how the users would respond to the product’s concept.
I conducted a group interview with all five Regional Managers: I let them use the MVP to generate a zone plan for Ho Chi Minh City, then asked them to rate the system’s suggestions and explain the reasoning behind their ratings.
The goal was to study:
• How users assess the system’s suggestions: What criteria do they use to evaluate its effectiveness?
• How users perceive the system’s role in delivery zone planning: Do they see it as a trustworthy assistant, and would they integrate it into their process?
Findings
The findings from this research amazed me:
From the previous research, I was already aware of the process users followed to create zone plans, including the factors and criteria they considered. Only now, however, did they reveal the ad-hoc factors they also took into account during planning (e.g., the time-consuming challenge of locating addresses in certain rural areas).
But what amazed me most was their (unexpected) perception of the system:
We asked them to rate each zone suggestion, one by one, as either “can apply (all or partly)” or “cannot apply at all”. The final “can apply” rate was only ~50%. Despite this, when we asked at the end for their overall opinion of the system’s performance, they still believed it had done a very good job and had put together better plans than they could have.
Surprisingly, not one of the five ever questioned the system’s decision-making process or how it worked. Instead, they assumed there was some inexplicable “magic” behind the product that made it as intelligent as a human. As one remarked, “That’s how AI works, isn’t it?”
And lastly, they believed it would completely replace their role someday.
Insights
As the creators, we were fully aware that our “AI product” at that stage was far from “as intelligent as a human”. The approach was rule-based, since the company could not afford the risk of a black-box mechanism for such a high-impact decision. And at that time the rules were so simple that the system could not replicate even half of the users’ decision-making process.
What we were not aware of, however, was how users perceived our product. Despite being high-level managers (they are Regional Managers!), when it came to ‘advanced technology’ they didn’t fully understand, they approached it in a surprisingly uncritical way. Their tendency to overestimate the system’s capabilities posed significant risks to the final outcome.
This realization brought to light a critical issue in our interaction design: a lack of transparency about how the system works and what its limitations are.
Takeaways
Based on the research findings, we came to prioritize transparency, explainability, and trust in our AI product. These qualities are just as critical as the underlying algorithms driving decision-making, yet they had previously been overlooked.
In the context of solving this “grouping areas into zones” problem, effective collaboration between the still-limited AI and human users (regional managers) is essential. For this partnership to work, users must clearly understand how the AI operates, its strengths, and how their expertise can complement the system’s limitations.
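One advantage of the rule-based approach is that this kind of understanding is achievable: because every suggestion comes from an explicit rule, the system could in principle surface the rule that produced it. A minimal sketch of the idea, where the rule, threshold, and wording are all invented for illustration:

```python
def suggest_split(zone_name, order_volume, truck_capacity):
    """One explicit rule with a human-readable rationale (illustrative only).

    Rule: if a zone's average daily volume exceeds truck capacity,
    suggest splitting it; otherwise suggest keeping it as is.
    """
    if order_volume > truck_capacity:
        return {
            "suggestion": f"Split {zone_name} into two zones",
            "because": (f"{zone_name} averages {order_volume} orders/day, "
                        f"above the truck capacity of {truck_capacity}"),
        }
    return {
        "suggestion": f"Keep {zone_name} as is",
        "because": (f"{order_volume} orders/day fits within the "
                    f"truck capacity of {truck_capacity}"),
    }

# Each suggestion carries its own reasoning, which the UI can show verbatim
print(suggest_split("Red Zone", 350, 300)["because"])
```

Showing the “because” alongside each suggestion lets a Regional Manager judge the rule itself rather than attribute the output to inexplicable “magic”.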
Transparency in AI products is just as important in a business context as in a societal one. Lacking it can undermine human–AI collaboration and, ultimately, the final business outcome.