LLM-Powered HMI Design for FUSO Heavy-Duty Trucks
A complete reference for UI/UX designers working on digital instrument clusters for the Mitsubishi Fuso Super Great, eCanter, and the upcoming ARCHION product family. From zero LLM knowledge to expert-level workflow integration.
Chapter 01: The FUSO Product & Platform Landscape (2026)
Mitsubishi Fuso Truck and Bus Corporation (MFTBC), a subsidiary of Daimler Truck AG, manufactures commercial vehicles across light-, medium-, and heavy-duty segments primarily for the Japanese and Asian markets. As of March 2026, the HMI designer's product scope spans four distinct vehicle lines with significantly different powertrain, display, and interaction requirements.
Super Great (Heavy-Duty)
The flagship. Powered by the 6R30 12.8L inline-6 diesel with SHIFTPILOT 12-speed AMT. Comprehensive ADAS suite: ABA6 (Active Brake Assist 6), Active Sideguard Assist 2.0, FBIS (Front Blindspot Information System), Active Attention Assist, and LDW. Exhaust aftertreatment: BlueTec SCR + DPF. Telematics: Truckonnect. Right-hand drive for Japan market — critical for zone placement analysis.
eCanter (Light-Duty Electric)
All-electric with modular battery packs (S/M/L configurations). The cluster must handle unique EV states: state-of-charge (SOC), estimated range (eRange), recuperation levels, charging status (AC L1/L2/DC), battery temperature warnings, cell balancing indicators, and V2X readiness. No tachometer — the power/regen gauge replaces it.
Fighter & Canter
Medium- and light-duty diesel platforms. Fighter shares many ADAS systems with Super Great at a lower tier. Canter includes diesel and mild-hybrid variants. Both use simpler cluster configurations but share the core design system with the heavy-duty line.
Some LLM outputs may state the Super Great display is "12.3-inch 1920×720." MFTBC does not publish detailed display specifications publicly. No Japanese heavy-duty OEM (FUSO, Hino, Isuzu, UD Trucks) makes this data available in press materials. Treat all LLM-generated display specs as unverified until confirmed against internal documentation.
Test whether your LLM knows the basics of the FUSO product line.
What to check: Compare the output against this chapter. Note which facts the LLM gets right, which it hedges on, and which it gets wrong — this calibrates your trust level.
Chapter 02: Regulations, Safety & Display Architecture
Your instrument cluster design must comply with multiple overlapping regulatory layers. Understanding which are mandatory versus advisory is critical for prioritizing design decisions.
| Standard | Scope | Key HMI Requirement |
|---|---|---|
| ISO 15005:2017 | Dialogue principles | Minimize visual demand; 1.5s single-glance budget (per ISO 16673 occlusion method) |
| ISO 26262:2018 | Functional safety | ASIL B: 200ms telltale display latency, 100ms unmotivated telltale response |
| ISO 2575:2021 | Symbols & colors | Standardized telltale symbols, mandated colors (red/amber/green) |
| ISO 15008:2017 | Character height | Minimum character height and contrast ratios at viewing distance |
| UNECE R121 | Telltale identification | Mandatory telltale symbols and placement rules |
| Euro NCAP 2026 | Safety rating | Physical controls mandate — turn signals, hazard lights, horn, wipers, SOS must not be display-only |
Functional Safety — ASIL B Pipeline
The instrument cluster is classified as ASIL B under ISO 26262. This classification defines hard technical constraints on your display pipeline.
Glance-Time Budget
SAE J941's Class B (heavy truck) eye ellipses have not been updated since 2010 and don't account for height-adjustable seats. Treat J941-based cluster placement as approximate.
Display Hardware & CAN Architecture
The instrument cluster receives data from dozens of ECUs via the vehicle's CAN bus network. Understanding this pipeline is essential — it determines what data you can display, how fast it arrives, and what safety constraints apply.
All CAN signal names generated by LLMs (e.g., BMS_StateOfCharge, DPF_RegenStatus) are illustrative placeholders. Real signal names come from the vehicle's DBC file, which is proprietary. Always verify against your actual DBC before using in specifications.
Have the LLM draft a regulatory compliance matrix for a display element you're currently working on.
What to check: Cross-reference every ISO clause number the LLM cites against the actual standard documents. Fabricated clause numbers are a common hallucination.
Chapter 03: ADAS, Powertrain & Telltale Design
The 2024+ Super Great carries one of the most comprehensive ADAS suites in the Japanese heavy-duty segment. The cluster must handle warning escalation across multiple systems while maintaining clear visual hierarchy.
ADAS Warning Escalation
Active Sideguard Assist 2.0 monitors blind spots on both sides during turns and lane changes. Three-stage escalation at ≤20 km/h: visual alert → audible alarm + A-pillar lamp → damage mitigation brake. When multiple ADAS systems trigger simultaneously, prioritize by criticality — emergency interventions (ABA full brake, ASA emergency stop) take absolute priority.
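The arbitration rule above (emergency interventions take absolute priority) can be sketched as a simple priority table. The warning names and priority values below are illustrative placeholders, not confirmed FUSO data:

```python
# Illustrative ADAS warning arbitration. Lower number = more critical.
# Names and rankings are placeholders for illustration only.
WARNING_PRIORITY = {
    "ABA_FULL_BRAKE": 0,          # emergency intervention
    "ASA_EMERGENCY_STOP": 0,      # emergency intervention
    "ABA_PARTIAL_BRAKE": 1,
    "ASA_AUDIBLE_ALERT": 2,
    "FBIS_VISUAL": 3,
    "LDW_VISUAL": 3,
    "ATTENTION_ASSIST_ADVISORY": 4,
}

def arbitrate(active_warnings):
    """Sort active warnings most-critical-first; the cluster
    renders index 0 as the dominant alert."""
    return sorted(active_warnings, key=lambda w: WARNING_PRIORITY[w])

# Example: a lane-departure alert and an emergency brake fire together.
stack = arbitrate(["LDW_VISUAL", "ABA_FULL_BRAKE"])
# The emergency intervention wins the display.
```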
Cross-Powertrain HMI Architecture
Diesel (Super Great, Fighter, Canter): DPF regeneration has 4 states: inactive, passive regen, active regen request (pull over), DPF full (derate imminent). AdBlue uses a 3-level depletion warning: advisory (>500km), warning (100-500km), critical (<100km — torque derate). Both follow ISO 2575 color conventions.
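The AdBlue thresholds above translate directly into display logic. A minimal sketch, using the km figures from the text (verify against internal FUSO specifications before use):

```python
# Three-level AdBlue depletion warning, per the chapter's thresholds.
# The km figures are from the text above and must be verified internally.
def adblue_warning_level(range_km: float) -> str:
    if range_km > 500:
        return "advisory"   # early notice
    if range_km >= 100:
        return "warning"    # amber per ISO 2575
    return "critical"       # red; torque derate imminent
```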
EV (eCanter): The cluster replaces the tachometer with a power/recuperation gauge. Key unique states: SOC with range estimation, charge port status with time-to-full, cell balancing indicator, thermal management warnings, V2X pre-conditioning.
Hydrogen (Hino Profia Z FCV — ARCHION context): Uses 6×70MPa tanks with Toyota MIRAI-derived fuel cell stacks. HMI must display: tank pressure per bank, FC stack temperature, hydrogen leak warning (safety-critical), dual-source range estimation.
Competitive Landscape
Japanese heavy-duty OEM competitors: Hino Profia (now within ARCHION), Isuzu Giga, UD Trucks Quon (Isuzu Motors). European benchmarks: Mercedes-Benz Actros, Volvo FH, Scania S-Series, DAF XG+. None publish detailed display specifications publicly — making LLM-assisted image analysis particularly valuable for building competitor benchmark libraries.
Competitor Benchmark Matrix (Template)
Use an LLM with competitor press photos (see Screenshot Workflows in Chapter 05) to fill this matrix. The empty cells are your research task.
| Feature | FUSO Super Great | Hino Profia | Isuzu Giga | Actros | Volvo FH |
|---|---|---|---|---|---|
| Display type | — | — | — | — | — |
| ADAS visualization | — | — | — | — | — |
| Warning escalation style | — | — | — | — | — |
| Information hierarchy | — | — | — | — | — |
| Night mode approach | — | — | — | — | — |
| EV-specific UI (if applicable) | — | — | — | — | — |
Use image search or manufacturer press kits to find cluster photos for 2-3 competitors. Paste them into your LLM with this prompt:
What to check: Verify at least 3 specific claims against manufacturer press materials or automotive press reviews. LLM image analysis is a starting point — not a finished deliverable.
Generate a warning escalation specification for an ADAS system.
What to check: Verify the color assignments match ISO 2575 conventions (green=normal, amber=caution, red=danger). Check that text strings are actually under 24 characters — count them.
Chapter 04: What LLMs Are & How They Work
A Large Language Model (LLM) is a type of AI trained on enormous amounts of text to learn patterns of human language. Given input text, it predicts what should come next. Scale this to billions of parameters and trillions of training words, and next-word prediction becomes surprisingly capable: writing prose, generating code, analyzing documents, and reasoning through multi-step problems.
Think of it as a brilliant, well-read colleague who has never worked at your company, doesn't know your specific situation, and occasionally makes things up with complete confidence. The more context you give, the better the output.
An LLM has no access to your internal FUSO design system, your Figma files, or your company's proprietary CAN database — unless you provide that information in your prompt. The model's knowledge has a training cutoff date. It may know about ISO 26262:2018 but it has never seen MFTBC's internal specifications.
How We Got Here
The Context Window
LLMs process text as tokens — sub-word pieces (~1 token ≈ 0.75 words). Everything you send — your prompt, pasted specs, and the model's response — must fit inside the model's context window. A full DBC file might be 50,000+ tokens. A single ISO standard section might be 10,000 tokens. Current windows range from 4K tokens (GPT-3.5) to 200K tokens (Claude).
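A rough budget check is easy to script. This sketch uses the ~4-characters-per-token heuristic implied by the 0.75-words-per-token figure above; real counts require the model's own tokenizer:

```python
# Planning-level token budgeting. The 4-chars/token ratio is a rough
# English-text heuristic, not an exact count.
def estimate_tokens(text: str) -> int:
    return max(1, round(len(text) / 4))

def fits_in_window(documents: list[str], window: int = 200_000,
                   reply_reserve: int = 4_000) -> bool:
    """Leave headroom for the model's reply, not just your input."""
    budget = window - reply_reserve
    return sum(estimate_tokens(d) for d in documents) <= budget
```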
Temperature: Controlling Creativity
Temperature controls randomness. Low values make output focused and nearly deterministic; high values introduce variety. Most models default to ~0.7.
| Task | Temperature | Why |
|---|---|---|
| Spec generation, state matrices | 0 – 0.2 | Deterministic, consistent output |
| ISO compliance checks | 0 | No creativity — factual matching |
| CAN signal extraction, DBC parsing | 0 | Precise data extraction, no embellishment |
| Alert text copywriting | 0.3 – 0.5 | Some variety, still controlled |
| Brainstorming layout alternatives | 0.7 – 0.9 | Explore diverse approaches |
| Icon metaphor exploration, naming | 0.8 – 1.0 | Maximum creative range |
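Mathematically, temperature rescales the model's logits before the softmax that produces token probabilities. A toy demonstration with invented logit values:

```python
import math

# Temperature divides the logits before softmax: low T sharpens the
# distribution toward the top token, high T flattens it.
def softmax_with_temperature(logits, temperature):
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.0, 1.0]                     # three candidate next tokens
sharp = softmax_with_temperature(logits, 0.2)   # near-deterministic
flat = softmax_with_temperature(logits, 1.0)    # more variety
# At low temperature, nearly all probability sits on the top token.
```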
Fundamental Limitations
| Limitation | What It Means For Your Work | How To Mitigate |
|---|---|---|
| Hallucination | Generates plausible but false info — e.g., fabricated ISO clause numbers, invented CAN signal names | Verify every safety-critical claim against primary sources |
| Knowledge cutoff | Doesn't know events after its training date — may miss latest ARCHION developments | Provide current information in your prompt or use web-search-enabled models |
| No persistent memory | Each conversation starts blank — your design system context is lost between sessions | Use system prompts and save/reload key context (see Ch08 WSCI framework) |
| Sycophancy | Agrees with you even when you're wrong — dangerous for safety-critical specs | Ask neutral questions; explicitly invite disagreement |
| Inconsistency | Same prompt, different results each time — problematic for reproducible specs | Set temperature to 0; run critical prompts 3x and compare |
| Math errors | Can miscalculate pixel dimensions, character counts, or timing values | Ask the model to write code for calculations rather than doing mental math |
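The last mitigation in practice: rather than trusting the model's mental arithmetic, ask it to produce a snippet like this and run it yourself. The 20-arcminute visual angle and 800 mm viewing distance below are illustrative values, not confirmed FUSO or ISO 15008 parameters:

```python
import math

# Character height needed to subtend a given visual angle at a viewing
# distance. Angle and distance values here are illustrative only.
def min_char_height_mm(viewing_distance_mm: float,
                       visual_angle_arcmin: float = 20.0) -> float:
    angle_rad = math.radians(visual_angle_arcmin / 60.0)
    return 2 * viewing_distance_mm * math.tan(angle_rad / 2)

h = min_char_height_mm(800)   # ~4.65 mm at 800 mm viewing distance
```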
▸ Under the Hood: How LLMs Actually Work (Optional Deep Dive)
Tokenization
The model breaks your text into tokens — sub-word pieces. "instrument cluster" becomes ["instrument", " cluster"]. "eCanter" might become ["e", "C", "anter"]. Each token gets a numeric ID.
Attention — How LLMs Understand Context
For each token, the model computes how much to "focus on" every other token. When processing "The brake warning activated because it sensed danger," attention figures out "it" refers to the brake warning — not the danger. It matches Queries against Keys to produce scores, then gathers information from Values. Think of it like a library: the Query is your search question, the Key is each book's spine label, and the Value is the content inside.
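The Query/Key/Value matching can be shown concretely. A minimal single-head scaled dot-product attention over toy 2-D vectors, with all numbers invented for illustration:

```python
import math

# Scaled dot-product attention: score each key against the query,
# softmax the scores into weights, then blend the value vectors.
def attention(query, keys, values):
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    weights = [e / sum(exps) for e in exps]
    blended = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
    return blended, weights

# "it" as a query attending over two candidate referents.
out, w = attention(query=[1.0, 0.0],
                   keys=[[1.0, 0.1], [0.0, 1.0]],    # brake warning, danger
                   values=[[1.0, 0.0], [0.0, 1.0]])
# w[0] > w[1]: more attention flows to the "brake warning" token.
```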
The Training Pipeline
Test the practical limits described in this chapter with a real task.
What to check: Did the model express appropriate uncertainty, or did it state everything with full confidence? This tests the hallucination and sycophancy risks from this chapter.
Chapter 05: Models, Tiers & Capabilities
Not all LLMs are equal. Models exist on a spectrum of capability, speed, and cost — and choosing the right tier for the right task is a core skill. Using a flagship model for every quick question wastes time and money; using a small model for complex specification work produces unreliable results.
The Three Tiers
| Tier | Examples | Strengths | Best FUSO Use Cases |
|---|---|---|---|
| Flagship | Claude Opus, GPT-4o, Gemini Ultra | Deep reasoning, nuance, long context, complex multi-step tasks | ISO compliance analysis, cross-system state matrix generation, HMI architecture planning, spec review with edge cases |
| Mid-tier | Claude Sonnet, GPT-4o-mini, Gemini Flash | Good reasoning at faster speed, strong instruction following | Prompt iteration, alert copywriting, competitor analysis summaries, documentation drafts |
| Small / Fast | Claude Haiku, GPT-3.5 | Very fast, low cost, good for simple structured tasks | CAN signal name formatting, quick translations, boilerplate text, simple lookups |
Start with the smallest model you think might work. If the output quality isn't sufficient, move up one tier. This is faster and cheaper than always defaulting to flagship. Most daily tasks — copywriting alert messages, reformatting tables, generating boilerplate — work perfectly on mid-tier models.
Multimodal Capabilities
Modern flagship and mid-tier models can see images. This unlocks powerful HMI workflows:
- Screenshot analysis: Paste a cluster screenshot and ask "Does this layout satisfy ISO 15005 glance-time constraints?"
- Competitor benchmarking: Upload photos of Actros, Volvo FH, and Scania clusters and ask for a structured comparison of information hierarchy
- Icon evaluation: Share a telltale icon set and ask "Which symbols might be confused at 1.5-second glance time?"
- Handwritten sketches: Photograph a whiteboard sketch and ask the model to convert it into a structured layout specification
Screenshot & Image Workflows
For a UI/UX designer, pasting screenshots directly into the LLM is the single highest-ROI multimodal workflow. You can get design feedback, compliance checks, and competitor analysis from images alone — no specification documents needed.
Figma Screenshot → Layout Feedback
Take a screenshot of your current cluster layout in Figma. Paste it into Claude or ChatGPT with this prompt:
Press Photos → Structured Comparison
Download press photos of competitor clusters (Actros, Volvo FH, etc. from manufacturer press kits). Paste 2-3 images into a single prompt:
Telltale Icon Set → Legibility Assessment
Export your telltale icon set from Figma as a single image. Paste and ask:
LLMs cannot measure actual pixel dimensions, calculate true contrast ratios, or simulate peripheral vision. Image-based feedback is qualitative, not quantitative. Use it for early-stage review and brainstorming, not as a substitute for proper usability testing or instrument-grade photometric analysis.
Model-Specific Behavior
Different model families have distinct personalities. Claude tends toward careful, nuanced responses and will express uncertainty. GPT models tend toward confident, comprehensive outputs. Neither approach is inherently better — but awareness helps you calibrate your verification effort.
Claude (Anthropic)
- Tends to flag uncertainty rather than guess
- Strong at following complex multi-part instructions
- Will push back if asked to do something it considers harmful
- Very large context windows (200K tokens)
- Cowork mode for autonomous file/browser tasks
GPT-4 / ChatGPT (OpenAI)
- Strong code generation and structured output
- More likely to produce confident answers even when uncertain
- Good at creative tasks and brainstorming
- Extensive plugin/tool ecosystem
- Image generation via DALL-E integration
Context Window Sizes
The context window determines how much information you can include in a single conversation. For HMI specification work, this matters enormously — a full CAN DBC file, multiple ISO standard excerpts, and your design system documentation can easily exceed 100K tokens.
Just because a model can accept 200K tokens doesn't mean you should always fill that window. Research shows models perform best when key information appears near the beginning or end of the context. Critical requirements buried in the middle of a massive prompt may be overlooked. Be strategic about what you include.
Compare model tiers on the same task to build intuition for when to use which.
What to check: Run this on a small model (Haiku/GPT-3.5) and a flagship model (Opus/GPT-4). Compare: did the small model miss edge cases? Did it handle the signal-lost fallback? This calibrates your model selection.
Chapter 06: Prompt Engineering for FUSO HMI
Prompt engineering is the skill of communicating with LLMs effectively. A well-structured prompt is the difference between a vague, generic response and a precise, actionable output tailored to your specific FUSO HMI design task. This is the single most important skill to develop.
The Five Foundational Principles
Putting Principles Into Practice
The five principles above describe what makes a good prompt. Here's the concrete checklist for how to build one:
Be Specific
Not "design a dashboard" but "design a DPF regen warning for the Super Great 2025, amber per ISO 2575, RHD Japan market, readable within 1.5-second glance at 85 km/h."
Provide Context
Paste the relevant CAN signals, display dimensions, and ISO clauses the model needs. Example: include the DPF-related signals from your DBC file and the ISO 2575 color mandates for caution states.
Define Output Format
"Respond as a markdown table" / "Return as JSON" / "Structure as a Figma component spec with layers, colors, and spacing values."
Assign a Role
"You are a senior automotive HMI designer familiar with ISO 26262 ASIL B constraints and Japanese commercial vehicle regulations for the right-hand drive market."
Iterate
The first output is a draft, not a final answer. Refine with specific feedback: "make the DPF status icon more prominent," "add the signal-lost fallback state," "shorten the warning text to under 20 characters."
The Prompt Skeleton
Role: Who should the AI act as?
Context: Vehicle, display, CAN signals, standards
Task: What exactly should it produce?
Format: Table, JSON, prose, Figma spec?
Constraints: Glance time, ASIL level, RHD, character limits
Examples: (Optional) Show the pattern you want
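If you assemble prompts programmatically, the skeleton maps naturally to a small helper. A sketch, where the section labels follow the skeleton above and the field contents are examples:

```python
# Assemble the Role/Context/Task/Constraints/Format skeleton into one
# prompt string. Field contents below are example values.
def build_prompt(role: str, context: str, task: str, fmt: str,
                 constraints: list[str], examples: str = "") -> str:
    parts = [
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        "Constraints:\n" + "\n".join(f"- {c}" for c in constraints),
        f"Format: {fmt}",
    ]
    if examples:
        parts.append(f"Examples:\n{examples}")
    return "\n\n".join(parts)

prompt = build_prompt(
    role="Senior automotive HMI designer (ISO 26262 ASIL B, RHD Japan)",
    context="Super Great digital cluster, DPF warning flow",
    task="Specify all 4 DPF states",
    fmt="Markdown table, one row per state",
    constraints=["1.5 s glance budget", "ISO 2575 colors",
                 "max 24 chars/line"],
)
```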
Prompt Anatomy — A Real FUSO Example
Here's how the skeleton maps to a real-world prompt:
Context: I'm designing the DPF regeneration warning flow for the Mitsubishi Fuso Super Great's 12-inch digital cluster. The 6R30 diesel engine uses a DPF (Diesel Particulate Filter) that requires periodic regeneration. There are 4 states: inactive, passive regen (automatic), active regen request (driver must pull over), and DPF full (torque derate imminent).
Task: Create a complete DPF regeneration display specification covering all 4 states. For each state, define:
1. Telltale icon description and color (per ISO 2575)
2. Accompanying text string (max 2 lines, max 24 characters per line)
3. Display zone and priority level
4. Animation behavior (static, pulse rate, flash rate)
5. Escalation trigger (what causes transition to next state)
Constraints: All elements must be readable within the ISO 15005 1.5-second glance budget. The active regen request must use the highest available non-safety-critical priority. Colors follow ISO 2575: green=normal, amber=caution, red=danger.
Format: Return as a markdown table with one row per state.
Six Prompt Patterns for Daily HMI Work
The Spec Generator
"Given [vehicle model], [powertrain], and [applicable standards], generate a complete [component] specification including [specific fields]. Format as [table/JSON/markdown]."
The Compliance Checker
"Review this [design/specification] against [ISO standard]. For each element, state whether it meets the requirement, cite the specific clause, and flag any violations with suggested corrections."
The State Matrix Builder
"For the [system name] with states [list states], create a state transition matrix. Include: trigger conditions, display changes at each transition, priority level, and fallback behavior if the CAN signal is lost."
The Copywriter
"Write [number] alert text variants for [warning type] on the [vehicle] cluster. Each must be under [character limit], use [language], and follow the [tone] voice. Rate each variant for clarity at highway-speed glance time."
The Layout Analyst
"Analyze this cluster screenshot. Identify the information hierarchy, estimate glance-time compliance per zone, and suggest improvements for [specific concern]. Consider that this is a right-hand drive vehicle for the Japan market."
The Competitor Benchmarker
"Compare the [competitor vehicle] instrument cluster against our [FUSO model] design for [specific aspect]. Structure the comparison as: feature, their approach, our approach, advantage/gap, recommendation."
Few-Shot Prompting — Teaching By Example
When zero-shot doesn't produce the format or quality you need, show the model what you want with one or two examples:
Chain-of-Thought — Step-by-Step Reasoning
For complex decisions where you need the model to show its work, ask it to reason step by step. This produces higher-quality analysis because the model can catch its own errors mid-reasoning.
Japan market = right-hand drive. The driver's primary viewing angle is mirrored compared to LHD markets. When asking an LLM about display zone placement, always specify "right-hand drive, Japan market." Without this, the model defaults to LHD assumptions from its predominantly Western training data, placing critical information in the wrong zones.
Transform a vague prompt into a specific one using the skeleton from this chapter.
What to check: The strong version should produce output you can use with minimal editing. If it still requires heavy rework, your context or constraints need more specificity.
Chapter 07: Worked Example — eCanter Charging Status Display
This chapter walks through one complete design task from start to finish — prompt, output, verification, iteration, final spec. Every technique from the preceding chapters is applied in sequence.
The Brief
You need to design the charging status display for the eCanter's digital cluster. The display must show: charging state (not connected, AC charging, DC fast charging), current SOC percentage, estimated time to full charge, and charging power in kW. It must work for all three battery configurations (S/M/L: 41/83/124 kWh). The display will be visible when the vehicle is stationary and connected to charging infrastructure, but it also needs a resting state for when no charger is connected. All text, colors, and iconography must comply with ISO 2575 and meet the 1.5-second glance-time budget defined by ISO 16673.
First Prompt Attempt
Using the prompt skeleton from Chapter 06, we construct a detailed first prompt. This is a realistic first attempt — thorough but not yet accounting for every edge case.
Context: I'm designing the charging status display for the Mitsubishi Fuso eCanter digital cluster. The eCanter is a fully electric light-duty truck available in three battery configurations: S (41 kWh), M (83 kWh), and L (124 kWh). It supports both AC charging (Type 2, up to 22 kW) and DC fast charging (CCS2, up to 104 kW). The charging display appears on the instrument cluster when the vehicle is stationary and the charging port is engaged.
Task: Create a complete charging status display specification. Define the following charging states: Not Connected, AC Charging, DC Fast Charging, and Charging Complete. For each state, specify:
1. Telltale icon description and color (per ISO 2575)
2. Primary text string (max 24 characters per line)
3. Secondary information displayed (SOC%, time remaining, power in kW)
4. Animation behavior (static, pulsing, or dynamic fill)
5. Display priority level
Constraints:
— All text must be readable within a 1.5-second glance (per ISO 16673 occlusion method).
— Colors must follow ISO 2575: green = normal/OK, amber = caution, red = danger, white = neutral information.
— Text strings must not exceed 24 characters per line for glance-time compliance.
— The specification must work identically across all three battery configurations (S/M/L); only the numeric values change.
— Right-hand drive layout (Japan market).
Format: Return as a markdown table with one row per charging state. Columns: State, Icon, Color, Primary Text, Secondary Info, Animation, Priority.
Raw LLM Output
The model returns a structured table. At first glance it looks solid — correct format, reasonable content, professional language. But three errors are embedded in the response.
Verification Pass
Now we apply the verification checklist from Chapter 10. This is the expert's most critical contribution — the model cannot reliably self-verify against domain-specific standards.
1. ISO 2575 Color Violation — "Charging Complete" uses blue.
Checking color assignments against ISO 2575: the "Charging Complete" state uses blue, but ISO 2575 mandates green for normal/OK states. A completed charge is a positive, normal condition — it must be green, not blue. Blue is not part of the ISO 2575 standard color vocabulary for vehicle telltales. The model likely borrowed this from consumer electronics conventions (smartphones, laptops) where blue often indicates "full." This is exactly the kind of domain bleed that Chapter 04 warned about.
2. Text String Exceeds 24-Character Limit — "DC Fast Charging in Progress" = 28 characters.
Counting characters in each primary text string: "No Charger Connected" = 20 chars (OK), "AC Charging" = 11 chars (OK), "DC Fast Charging in Progress" = 28 chars (FAIL — exceeds the 24-character constraint by 4 characters), "Charge Complete" = 15 chars (OK). The model added "in Progress" for clarity but violated the explicit constraint. Needs to be shortened to 24 characters or fewer — e.g., "DC Fast Charging" (16 chars).
3. Missing Edge Case — No "Charging Interrupted" state.
The specification covers the happy path only. There is no state for a charging session that is unexpectedly interrupted — cable disconnection during charge, EVSE fault, ground fault, communication error between the vehicle and charger. In the real world, charging interruptions are common and the driver must be clearly informed. A "Charging Interrupted" state with an amber caution indicator is required. The model was not asked for it explicitly, but an expert knows it is essential for any production charging display.
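Checks like the character counts above are deterministic and worth scripting rather than eyeballing. A minimal sketch over the four primary text strings from the model's output:

```python
# Verify every primary text string against the 24-character limit.
LIMIT = 24
primary_texts = [
    "No Charger Connected",
    "AC Charging",
    "DC Fast Charging in Progress",   # the offending string
    "Charge Complete",
]

violations = [(t, len(t)) for t in primary_texts if len(t) > LIMIT]
# violations → [("DC Fast Charging in Progress", 28)]
```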
Iteration Prompt
We send a targeted follow-up correcting all three errors and adding the missing state. Notice: we don't re-explain the entire context. We reference the prior output and provide specific corrections.
1. Color fix: "Charging Complete" must use green, not blue. ISO 2575 does not include blue in the standard telltale color set. A completed charge is a normal/OK condition = green.
2. Text length fix: "DC Fast Charging in Progress" is 28 characters — exceeds the 24-character-per-line constraint. Shorten it to 24 characters or fewer while keeping it clearly distinct from the AC state.
3. Missing state: Add a "Charging Interrupted" state for unexpected disconnection, EVSE faults, or communication errors. This should use amber (caution per ISO 2575), include a prompt to check the connection, and have a higher priority than the normal charging states.
Please regenerate the complete table with all corrections applied.
Corrected Output
The model returns a revised specification with all corrections applied.
Devil's Advocate
Before finalizing, we use the devil's advocate technique to stress-test a design decision. We pick one choice that seems reasonable and ask the model to argue against it.
Final Specification
After verification, iteration, and adversarial review, we have a production-ready specification. The devil's advocate pass confirmed the color decision is correct per ISO 2575 but surfaced the fallback scenario, which we address by adding a note about kW display prominence.
eCanter Charging Status Display — Final Specification
Applies to all battery configurations: S (41 kWh), M (83 kWh), L (124 kWh). All numeric values are dynamically computed; display logic is identical across configurations.
| State | Icon | Color | Primary Text | Secondary Info | Animation | Priority |
|---|---|---|---|---|---|---|
| Not Connected | Plug outline, disconnected | White | No Charger Connected | SOC: [xx]% | Static | Low |
| AC Charging | Plug with AC wave symbol | Green | AC Charging | SOC: [xx]% · [x]h [xx]m remaining · [xx.x] kW | Slow pulse (1 Hz), battery fill animation | Medium |
| DC Fast Charging | Plug with lightning bolt | Green | DC Fast Charging | SOC: [xx]% · [x]h [xx]m remaining · [xxx.x] kW | Fast pulse (2 Hz), rapid battery fill animation | Medium-High |
| Charging Interrupted | Plug with exclamation triangle | Amber | Charging Interrupted | SOC: [xx]% · Last power: [xx.x] kW | 2 Hz flash | High |
| Charging Complete | Plug with checkmark | Green | Charge Complete | SOC: 100% | Static | Low |
Character count verification: No Charger Connected (20) · AC Charging (11) · DC Fast Charging (16) · Charging Interrupted (20) · Charge Complete (15). All within 24-character limit.
Design note: The charging power value (kW) should be displayed at a minimum font size of 32 dp to serve as a glance-time-compliant differentiator between AC and DC modes, addressing the fallback scenario identified during adversarial review.
Takeaway
This worked example used five techniques in sequence: (1) structured prompt with the Role/Context/Task/Constraints/Format skeleton, (2) domain-expert verification against ISO 2575 and glance-time standards, (3) targeted iteration with specific corrections, (4) devil's advocate adversarial review, and (5) final specification lockdown with character-count verification. This is the workflow. Every HMI design task through an LLM should follow this pattern — or a deliberate subset of it when time is constrained.
Chapter 08: Advanced Techniques & Context Engineering
Beyond basic prompting, advanced techniques let you extract consistently higher-quality results, manage long design sessions, and tackle complex multi-part HMI specifications that would be difficult with single prompts.
Self-Consistency Through Multiple Passes
For critical specifications, don't rely on a single model response. Run the same prompt 3 times and compare results. Where all three agree, you have high confidence. Where they diverge, that's exactly where you need human expert judgment. This technique is especially valuable for:
- Warning priority rankings — does the model consistently rank ABA6 above DPF warnings?
- Glance-time estimates — are the timing recommendations consistent?
- Color assignments — does it always map to ISO 2575 correctly?
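The comparison step can be sketched as follows, with mocked responses standing in for the three API calls:

```python
from collections import Counter

# Self-consistency check: run the same prompt N times, extract the
# answer of interest from each run, and flag divergence.
def consensus(answers):
    """Return (majority answer, agreement ratio) across runs."""
    top, count = Counter(answers).most_common(1)[0]
    return top, count / len(answers)

# Mocked run results, e.g. the color each run assigned to DPF active regen.
runs = ["amber", "amber", "red"]
answer, agreement = consensus(runs)
# agreement < 1.0 → this is exactly where expert judgment is needed.
```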
Reflection & Self-Correction
Ask the model to review its own output before you accept it. This simple technique catches errors the model would miss in a single pass.
Devil's Advocate Technique
Force the model to argue against its own recommendation. This surfaces edge cases and risks you might not have considered.
Context Engineering — Managing Long Sessions
In extended design sessions, the conversation grows and the model's earlier context gets compressed or lost. This is context drift — the model gradually "forgets" constraints you set at the beginning. Symptoms:
- The model stops applying your design system rules
- Outputs start contradicting earlier specifications
- Format consistency degrades
- The model forgets which vehicle model or powertrain you're designing for
The WSCI Framework
Four strategies for managing context effectively — think of them as memory management for LLMs:
Write — Save for Later
Persist key decisions outside the conversation. Save successful prompt templates, finalized specifications, and design decisions to files. Retrieve them when starting new sessions instead of re-explaining everything.
Select — Pull Only What's Relevant
Don't paste your entire DBC file. Extract the 15 signals relevant to the current display zone. Include only the ISO clauses that apply to this specific task. Surgical context selection beats brute-force dumping.
Compress — Summarize Long Context
Before a long conversation drifts, summarize what's been decided: "So far we've defined: DPF has 4 states, amber/red per ISO 2575, 200ms render budget. Now let's tackle the animation timing."
Isolate — Split Complex Tasks
One massive prompt handling telltales + layout + state machines + translations will underperform four focused prompts, each with its own tailored context. Break your HMI spec into sub-tasks.
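The Isolate strategy can be as simple as one shared constraint block plus one focused instruction per sub-task. A sketch with illustrative sub-tasks:

```python
# Shared core constraints -- repeated in every focused prompt
CORE = ("Vehicle: Super Great, RHD Japan market. Standards: ISO 2575 colors, "
        "ISO 15005 1.5s glance budget, ASIL B 200ms telltale latency.\n\n")

# Each sub-task gets its own prompt instead of one massive request
SUBTASKS = {
    "telltales": "Specify the telltale set for the DPF warning (4 states).",
    "layout":    "Propose zone placement for the DPF warning on an RHD cluster.",
    "states":    "Define the DPF state machine: entries, exits, fallbacks.",
    "text":      "Draft driver-facing alert text within the glance budget.",
}

prompts = {name: CORE + task for name, task in SUBTASKS.items()}
```

Each prompt carries only the shared core plus its own instruction, so no sub-task's context buries another's.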
System Prompts — Your Persistent Persona
A system prompt sets the model's behavior for the entire conversation. For FUSO HMI work, a well-crafted system prompt eliminates repetitive context-setting.
You are an expert UI/UX designer for Mitsubishi Fuso heavy-duty
truck instrument clusters. You work within these constraints:
Vehicle: Super Great (6R30 diesel, SHIFTPILOT 12-speed AMT)
Market: Japan (right-hand drive)
Standards: ISO 15005 (1.5s glance), ISO 26262 ASIL B (200ms
telltale latency), ISO 2575 (symbol colors), UNECE R121
Display: Digital cluster, 200ms render budget
Protocol: CAN/CAN FD (J1939), CRC-protected messages
Rules:
- All telltale colors follow ISO 2575 (red/amber/green)
- All text fits within 1.5-second glance budget
- Right-hand drive zone placement (mirror LHD assumptions)
- Flag any specification as UNVERIFIED if based on general
knowledge rather than confirmed FUSO data
- CAN signal names are illustrative — always note this
Context Window Strategy
Think of your context window as a budget. Here's how to spend it wisely:
Too little: "Design a warning." → Generic, useless output.
Too much: Pasting 5 complete ISO standards + entire DBC file + full design system doc → Model loses focus, key requirements get buried.
Just right: Relevant ISO clauses + applicable CAN signals + specific design constraints + clear question → Precise, actionable output.
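A rough way to audit how you're spending the budget before pasting. The 4-characters-per-token ratio is a common heuristic for English prose only (Japanese text runs closer to 1-2 characters per token), and the 100k budget is an illustrative default:

```python
def estimate_tokens(text: str) -> int:
    """Crude token estimate: ~4 characters per token for English prose."""
    return max(1, len(text) // 4)

def context_report(parts: dict[str, str], budget: int = 100_000) -> dict:
    """Show how each context block spends the token budget."""
    costs = {name: estimate_tokens(t) for name, t in parts.items()}
    costs["TOTAL"] = sum(costs.values())
    costs["REMAINING"] = budget - costs["TOTAL"]
    return costs

report = context_report({
    "iso_clauses": "ISO 2575 clause text..." * 50,
    "can_signals": "15 relevant signals from the DBC..." * 10,
    "question":    "Specify the DPF warning escalation.",
})
print(report)
```

If one block dominates the report, that is the block to Select or Compress before you send the prompt.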
Use the reflection pattern to catch errors in a previous LLM output.
What to check: Did the reflection actually catch real issues, or did it just say "looks good"? If it found nothing, deliberately introduce an error and re-run to verify the technique works.
Chapter 09: Claude Cowork for HMI Design
Claude Cowork (launched January 2026) represents a shift from conversation to autonomous action. Instead of asking Claude questions and receiving text answers, Cowork can independently browse the web, read and write files, execute multi-step research workflows, and iterate on its own results — all while you continue other work.
The Autonomy Slider
Think of AI assistance as a sliding scale, not an on/off switch: fully manual chat at one end, supervised Cowork sessions in the middle, and fully autonomous agents at the other.
For FUSO HMI design work, the Cowork position is the sweet spot. The model works autonomously on well-defined tasks while you retain oversight and final approval. Full agent mode is for development and testing workflows, not safety-critical specification work.
What Cowork Can and Can't Access
Cowork CAN
- Browse public websites and press archives
- Read files you upload or paste into the conversation
- Write new files (specs, reports, tables)
- Execute multi-step web research with source citations
- Cross-reference documents you provide against each other
- Generate and iterate on text, tables, and structured data
Cowork CANNOT
- Access your Figma files (unless you export and upload screenshots/SVGs)
- Read your company Confluence, SharePoint, or internal wikis
- Open proprietary DBC/CAN database files (unless you paste relevant sections)
- Access your email, Slack, or internal tools
- Remember context from previous conversations (unless you re-provide it)
- Verify information against paywalled standards documents it can't access
The gap between "can" and "cannot" is bridged by you uploading or pasting the relevant data. Cowork becomes dramatically more useful when you provide it with: exported Figma frames as PNGs, relevant CAN signal excerpts from your DBC file, specific ISO standard sections (copy-paste the relevant clauses), and your internal design system documentation. Think of Cowork as a powerful analyst who just started at your company — brilliant but needs onboarding materials.
The Agent Loop
When Cowork operates autonomously, it follows a structured loop: gather context, plan the next step, act (browse, read, or write), check the result against the goal, and repeat until the task is complete.
Seven Cowork Use Cases for FUSO HMI
Competitor Cluster Research
"Browse automotive press sites and create a comparison matrix of 2024-2025 heavy-duty truck digital clusters from Actros, Volvo FH, Scania S, and DAF XG+. Include: display size, resolution, information layout approach, ADAS visualization method, and night mode implementation."
ISO Standard Cross-Reference
"Read my DPF warning specification file and cross-reference every element against ISO 2575:2021 Annex A and ISO 15005:2017 clause 5.2. Flag violations and suggest corrections. Save results to a new file." (requires: upload your spec file AND paste the relevant ISO clauses)
Design System Audit
"Review all telltale specifications in our design system folder. Check for: color consistency with ISO 2575, animation rate consistency across warning levels, text length compliance, and icon uniqueness. Generate an audit report." (requires: upload all telltale spec files to the conversation)
CAN Signal Mapping
"Parse the attached DBC file. Extract all signals relevant to the instrument cluster. For each signal, determine: display zone, update rate, value range, and what the driver should see. Format as a signal-to-display mapping table." (requires: upload or paste your DBC file)
Multi-Language Alert Validation
"Take our warning text specifications and verify the Japanese translations fit within the character limit for each display zone. Check that no Japanese string exceeds the allocated pixel width at 24px font size. Flag any that overflow."
State Machine Completeness Check
"Review the ADAS state machine I've defined. Verify that every state has a defined entry condition, exit condition, display specification, and fallback behavior. Identify any unreachable states or missing transitions. Generate a DOT graph of the complete state machine."
ARCHION Transition Research
"Research the ARCHION Corporation formation timeline (FUSO + Hino, Daimler Truck + Toyota). Find published information about shared platform implications for instrument cluster design. Focus on whether existing FUSO or Hino HMI design systems will be unified."
Focus 80% of your LLM usage on the 20% of tasks that consume the most time: specification writing, compliance checking, research aggregation, and state machine validation. Don't use Cowork for tasks that take 2 minutes by hand — the overhead of writing a good prompt exceeds the time saved.
Delegate a research task that would take you 30+ minutes manually.
What to check: Verify at least 3 specific claims against manufacturer press materials or automotive press reviews. Cowork research is a starting point — not a finished deliverable.
Chapter 10: Verification & Safety Protocols
Every LLM output in your HMI workflow must be verified before it enters a specification, gets committed to a design file, or influences a safety-relevant decision. This isn't optional — it's the difference between using LLMs productively and introducing silent errors into safety-critical vehicle systems.
Why Verification Matters More Here
Unlike web design or marketing copy, errors in truck instrument cluster specifications can have physical safety consequences. A wrong color mapping for a brake warning. An incorrect priority ranking that suppresses a critical alert. A telltale timing that violates ASIL B requirements. These aren't cosmetic bugs — they affect driver safety.
Low Stakes
Marketing copy, brainstorming, competitor research summaries, internal documentation formatting
Verification: Quick sanity check
Medium Stakes
Layout proposals, icon design suggestions, CAN signal mapping drafts, non-safety UI text
Verification: Cross-reference against design system + peer review
High Stakes
Telltale color/priority specs, ASIL B timing values, ADAS warning escalation logic, regulatory compliance claims
Verification: Manual expert verification against primary sources + team review + testing
The Three Verification Traps
1. Sycophancy — The Model Agrees With You
LLMs have a strong tendency to agree with whatever premise you present. If you say "The DPF warning should be green, right?" the model is likely to say "Yes, that makes sense" — even though ISO 2575 mandates amber for caution states. This is sycophancy, and it's one of the most dangerous failure modes in safety-critical work.
How to Counter Sycophancy
- Ask neutral questions: "What color should the DPF warning be?" — not "The DPF warning should be green, right?"
- Invite disagreement: Add "Challenge any assumptions in my question that may be incorrect" to your prompt
- Test with wrong premises: Deliberately include an error and see if the model catches it. If it doesn't, your verification process needs to be stronger
- Use the Devil's Advocate technique from Chapter 08 after every critical recommendation
2. Automation Bias — Trusting Because It's AI
When output is well-formatted and confidently stated, humans tend to accept it without scrutiny. A table with perfect markdown formatting, clear headers, and professional language looks authoritative — even if the content contains errors. The more polished the output, the more carefully you need to verify the substance.
How to Counter Automation Bias
- Verify the hardest items first: Start with the cells in the table that require the most domain knowledge — those are where errors hide
- Check specific numbers: Timing values, character counts, pixel dimensions — verify these against primary sources, not the model's output
- Separate format from content: A beautifully formatted wrong answer is still wrong
3. Consistency Illusion — Same Input, Different Output
The same prompt can produce different results each time due to randomness in token sampling. You might run a compliance check today and get "all clear," then run the identical check tomorrow and get three violations flagged. For critical specifications:
- Run the same verification prompt at least 3 times
- Set temperature to 0 (or your platform's minimum) to reduce sampling randomness; even then, identical outputs across runs are not guaranteed
- Treat any inconsistency across runs as a flag requiring human expert review
The Verification Checklist
Apply this checklist to every LLM output that will enter a specification document:
| Check | Question | Red Flag |
|---|---|---|
| Source | Can I trace this claim to a primary source (ISO doc, DBC file, MFTBC spec)? | Model cites a standard clause that doesn't exist |
| Color | Do all telltale colors match ISO 2575 mandates? | Green used for a caution state, amber for informational |
| Timing | Do all latency values meet ASIL B requirements? | Render time > 200ms, unmotivated response > 100ms |
| Glance | Can each display element be read in ≤ 1.5 seconds? | Text strings exceeding 3 lines or 24+ characters with complex terminology |
| Priority | Is the warning priority ranking consistent with safety criticality? | Convenience alerts ranked above safety warnings |
| RHD | Are zone placements correct for right-hand drive? | Critical info placed assuming left-hand drive |
| CAN | Are signal names marked as illustrative/unverified? | Model presents invented signal names as factual |
| Completeness | Are all states covered, including edge cases and failures? | Missing "signal lost" or "sensor fault" states |
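Some checklist rows can be enforced programmatically before the human pass. A sketch of the Color row — the severity-to-color mapping follows the red/amber/green mandate stated in this chapter, and the warning names are illustrative:

```python
ISO_2575_COLORS = {"danger": "red", "caution": "amber", "normal": "green"}

def check_colors(spec: list[dict]) -> list[str]:
    """Return violations where a telltale's color doesn't match its severity."""
    return [f"{row['name']}: {row['color']} used for {row['severity']} "
            f"(expected {ISO_2575_COLORS[row['severity']]})"
            for row in spec
            if row["color"] != ISO_2575_COLORS[row["severity"]]]

draft = [
    {"name": "BRAKE_FAIL", "severity": "danger",  "color": "red"},
    {"name": "DPF_REGEN",  "severity": "caution", "color": "green"},  # wrong!
]
print(check_colors(draft))  # flags DPF_REGEN: green used for a caution state
```

Mechanical checks like this never replace the manual rows (Source, Priority, Completeness), but they make the Color row impossible to skip.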
No LLM-generated specification for safety-critical display elements (ASIL B telltales, ADAS warnings, brake system indicators) should ever be used without manual verification against the primary standard documents and internal MFTBC specifications. The LLM accelerates your work — it does not replace your engineering judgment.
Stress-test the verification checklist on a real LLM output.
What to check: This is a meta-exercise — you're verifying your own verification process. If most items are "unable to verify," you need better access to primary source documents.
Chapter 11: ARCHION, Coretura & SDV-Era HMI
The Japanese heavy-duty truck industry is undergoing its largest structural transformation in decades. Two major partnerships will reshape the competitive and technical landscape that your HMI designs must navigate.
ARCHION Corporation (April 2026)
ARCHION Corporation is a holding company that brings Mitsubishi Fuso (Daimler Truck) and Hino Motors (Toyota Motor) under a single corporate umbrella. FUSO and Hino continue to operate as separate brands — this is not a brand merger, but a structural unification creating Japan's largest commercial vehicle group. For HMI designers, the implications are significant: potential design system convergence between FUSO and Hino product lines, shared ADAS platforms requiring consistent warning interfaces, and a combined product portfolio spanning light-duty (eCanter, Dutro) through heavy-duty (Super Great, Profia).
Coretura AB
Coretura AB is a separate joint venture between Daimler Truck and Volvo Group focused on developing a shared Software-Defined Vehicle (SDV) platform. This is purely a software collaboration — the companies remain competitors in vehicle sales. The SDV platform will provide the middleware and software architecture that future truck instrument clusters run on.
What SDV Means for HMI Design
Software-Defined Vehicles separate hardware from software update cycles. For instrument cluster designers, this means:
- Over-the-air updates: HMI changes can ship after vehicle delivery. Your design must handle versioning — different trucks in the same fleet may display different UI versions
- Shared rendering platform: Coretura's middleware may standardize the graphics pipeline across Daimler and Volvo products. Your FUSO-specific design system will need to work within this shared framework
- API-driven displays: Instead of hardcoded screen layouts, clusters may consume display data from a standardized API — making your design specifications more like design tokens and less like pixel-perfect mockups
- Longer design lifecycles: When software updates can change the UI independently of hardware, your initial design must accommodate years of iterative updates while maintaining visual and functional consistency
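The design-tokens-over-mockups shift can be sketched as versioned token sets with fallback, so a fleet mixing UI versions still resolves consistently. Token names, values, and version strings are illustrative (note that plain string comparison breaks for versions like "1.10" — a real system would parse versions properly):

```python
# Versioned token sets: trucks on older software resolve against their
# shipped version; tokens absent at that version fall back to older sets.
TOKENS = {
    "1.0": {"color.warning.caution": "#E8A000", "zone.speed.x": 820},
    "1.1": {"color.warning.caution": "#E8A000", "zone.speed.x": 840,
            "color.ev.regen": "#2E7D32"},   # added for the EV cluster in 1.1
}

def resolve(token: str, version: str):
    """Look up a token at a version, falling back through older versions."""
    for v in sorted(TOKENS, reverse=True):
        if v <= version and token in TOKENS[v]:
            return TOKENS[v][token]
    return None

print(resolve("zone.speed.x", "1.1"))    # 840
print(resolve("color.ev.regen", "1.0"))  # None -- token didn't exist yet
```

Specifying values this way is what "more like design tokens, less like pixel-perfect mockups" means in practice: the spec is a versioned lookup, not a screen.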
Hydrogen & Next-Gen Powertrains
The Hino Profia Z FCV (hydrogen fuel cell) joins the ARCHION product family, adding a third powertrain type to your HMI scope. Hydrogen-specific displays require: tank pressure per bank (6 × 70 MPa tanks), fuel cell stack temperature, hydrogen leak warnings (safety-critical, immediate action required), and dual-source range estimation. Your design system needs to be modular enough to accommodate this without a ground-up redesign.
Use LLMs to stay current on ARCHION and Coretura developments. Set up periodic Cowork research tasks to scan industry press, Daimler Truck investor presentations, and Toyota corporate announcements for HMI-relevant updates. The model can synthesize information from multiple sources faster than manual monitoring — but always verify key claims against primary corporate communications.
Use an LLM to research a fast-moving industry development.
What to check: Cross-reference against Daimler Truck and Toyota investor relations pages. Industry mergers generate speculation — only trust claims linked to official corporate communications.
Chapter 12: Expert Workflow & Checklist
You've now covered the complete foundation: the vehicle platforms, the regulations, how LLMs work, how to prompt them effectively, how to use advanced techniques, how to verify their output, and how the industry is evolving. This final chapter puts it all together into a practical daily workflow.
Start Small, Then Compound
Don't try to revolutionize your entire workflow on day one. Start with one use case — perhaps using an LLM to draft alert text for a single warning type. Master that. Then expand to specification tables. Then to compliance checking. Each successful use case builds your prompt engineering intuition and your confidence in verification. This is compounding — small, consistent improvements that accumulate into transformative workflow change.
The Daily Workflow
Common Mistakes to Avoid
Trusting Without Verifying
The number one mistake. LLM output looks authoritative even when wrong. Always verify safety-critical specifications against primary sources.
Under-Specifying Prompts
"Design a warning" will produce generic output. "Design a DPF regen warning for the Super Great, amber per ISO 2575, RHD Japan market" produces usable output.
Over-Prompting Simple Tasks
Not every task needs a 500-word prompt with few-shot examples and chain-of-thought. Match your prompt complexity to the task complexity.
Ignoring Context Drift
Long conversations degrade quality. Break sessions by topic. Re-state critical constraints periodically. Don't assume the model remembers your requirements from 30 messages ago.
Using Wrong Model Tier
Flagship models for simple formatting tasks waste time and money. Small models for complex compliance analysis miss critical nuances. Match the model to the task.
Not Saving Successful Prompts
When a prompt produces excellent output, save it as a template. Building a personal prompt library compounds your efficiency over time.
Your Expert Checklist
Pin this to your workstation. Before sending any LLM-generated specification forward:
| Done | Checklist item |
|---|---|
| ☐ | Context set: Vehicle model, powertrain, market, and standards specified in prompt |
| ☐ | ISO 2575 colors verified: Red = danger, amber = caution, green = normal — no exceptions |
| ☐ | ASIL B timing confirmed: ≤ 200ms telltale display, ≤ 100ms unmotivated response |
| ☐ | Glance budget respected: All elements readable within 1.5-second single glance (ISO 15005) |
| ☐ | RHD zones correct: Zone placement verified for right-hand drive Japan market |
| ☐ | CAN signals marked: All signal names flagged as illustrative/unverified unless from actual DBC |
| ☐ | Edge cases covered: Signal lost, sensor fault, simultaneous warnings, CAN bus failure states defined |
| ☐ | Hallucination check: No display specs, CAN signal names, or standard clauses taken at face value |
| ☐ | Self-consistency passed: Critical specifications verified by running prompt 3 times and comparing |
| ☐ | Peer reviewed: Another team member has reviewed the LLM-generated specification |
Quick-Reference: Before & After Every Prompt
Before You Prompt
- What exactly do I need? (Be specific in your own mind first)
- What context does the model need that it doesn't have?
- What format do I want the output in?
- What's the quality bar? (First draft vs. final spec)
- How will I verify the output?
After You Get Output
- Does this match what I actually needed?
- Are the key facts verifiable against primary sources?
- Is the reasoning sound? (Follow the logic)
- What's missing that I should add?
- If iterating: what specific feedback should I give?
Measuring Your Impact
Track these metrics to quantify how LLMs improve your workflow:
- Specification draft time: How long from blank page to first complete draft? (Target: 50-70% reduction)
- Compliance review coverage: How many standards clauses checked per review? (Target: 2-3x increase)
- Iteration cycles: How many prompt-adjust-verify loops to reach final spec? (Decreases as prompt skills improve)
- Error rate: Errors caught in verification per specification. (Should decrease as you build better prompts and system contexts)
Build your personal system prompt.
What to check: After a week of using your system prompt, note which instructions the LLM follows consistently and which it ignores. Refine accordingly.
Glossary
Terms used throughout this reference. Sorted by category.
| Vehicle & Powertrain | |
|---|---|
| ADAS | Advanced Driver Assistance Systems — automated safety features like emergency braking, blind spot detection, lane departure warnings |
| AMT | Automated Manual Transmission — clutchless manual gearbox (FUSO uses "SHIFTPILOT" brand name) |
| CAN / CAN FD | Controller Area Network — the communication bus connecting ECUs in the vehicle. CAN FD is the faster variant (data phase typically 2-5 Mbps) |
| DBC | Database Container — the file format that defines CAN signal names, IDs, byte positions, and scaling factors. Proprietary per manufacturer |
| DPF | Diesel Particulate Filter — traps soot from diesel exhaust; requires periodic "regeneration" (burning off accumulated soot) |
| ECU | Electronic Control Unit — any embedded computer module in the vehicle (engine ECU, ADAS ECU, body ECU, etc.) |
| J1939 | SAE standard protocol for CAN communication in heavy-duty vehicles. Defines message IDs and data formats |
| RHD / LHD | Right-Hand Drive / Left-Hand Drive — refers to driver position. Japan market = RHD |
| SCR | Selective Catalytic Reduction — exhaust aftertreatment using AdBlue (urea) to reduce NOx emissions |
| SOC | State of Charge — battery level as a percentage (eCanter) |
| V2X | Vehicle-to-Everything communication — wireless data exchange between vehicle and infrastructure, other vehicles, or grid |

| Safety & Standards | |
|---|---|
| ASIL | Automotive Safety Integrity Level — ISO 26262 risk classification. ASIL A (lowest) to ASIL D (highest). Instrument clusters are typically ASIL B |
| ISO 2575 | International standard defining telltale symbols and mandated colors for vehicle displays |
| ISO 15005 | Standard for in-vehicle dialogue management — establishes principles for driver-display interaction |
| ISO 16673 | Occlusion method standard — source of the 1.5-second maximum single-glance time measurement |
| ISO 26262 | Functional safety standard for road vehicles — defines ASIL levels and safety requirements |
| UNECE R121 | UN regulation specifying required telltale symbols and identification rules |
| Telltale | An indicator light/icon on the instrument cluster that communicates a vehicle status or warning to the driver |

| LLM & AI | |
|---|---|
| Context window | The maximum amount of text (measured in tokens) an LLM can process in a single conversation |
| Few-shot | Providing the LLM with 1-3 examples of the desired output format before your actual request |
| Hallucination | When an LLM generates plausible-sounding but factually incorrect information |
| RLHF | Reinforcement Learning from Human Feedback — a training technique that aligns model outputs with human preferences |
| Sycophancy | The tendency of LLMs to agree with the user's stated position even when it's incorrect |
| Temperature | A parameter controlling output randomness. Low (0-0.2) = deterministic; High (0.7-1.0) = creative |
| Token | The sub-word unit LLMs process. ~1 token ≈ 0.75 English words. "eCanter" might be 3 tokens |
| Zero-shot | Asking the LLM to perform a task with no examples — just a clear instruction |
Appendix: Prompt Library
Every reusable prompt template from this reference, collected for quick access. Replace [BRACKETED TERMS] with your project specifics.