Expert Reference — Mitsubishi Fuso

LLM-Powered HMI Design for FUSO Heavy-Duty Trucks

A complete reference for UI/UX designers working on digital instrument clusters for the Mitsubishi Fuso Super Great, eCanter, and the incoming ARCHION product family. From zero LLM knowledge to expert-level workflow integration.

~31 min read · 12 chapters · March 2026

Chapter 01: The FUSO Product & Platform Landscape (2026)

Mitsubishi Fuso Truck and Bus Corporation (MFTBC), a subsidiary of Daimler Truck AG, manufactures commercial vehicles across light-, medium-, and heavy-duty segments primarily for the Japanese and Asian markets. As of March 2026, the HMI designer's product scope spans four distinct vehicle lines with significantly different powertrain, display, and interaction requirements.

FUSO design system product family: Super Great (heavy · diesel · 6R30) · Fighter (medium · diesel) · Canter (light · diesel/hybrid) · eCanter (light · all-electric).
The FUSO product family spans diesel, hybrid, and fully electric powertrains — each requiring distinct HMI states and display strategies.

Super Great (Heavy-Duty)

The flagship. Powered by the 6R30 12.8L inline-6 diesel with SHIFTPILOT 12-speed AMT. Comprehensive ADAS suite: ABA6 (Active Brake Assist 6), Active Sideguard Assist 2.0, FBIS (Front Blindspot Information System), Active Attention Assist, and LDW. Exhaust aftertreatment: BlueTec SCR + DPF. Telematics: Truckonnect. Right-hand drive for Japan market — critical for zone placement analysis.

eCanter (Light-Duty Electric)

All-electric with modular battery packs (S/M/L configurations). The cluster must handle unique EV states: state-of-charge (SOC), estimated range (eRange), recuperation levels, charging status (AC L1/L2/DC), battery temperature warnings, cell balancing indicators, and V2X readiness. No tachometer — the power/regen gauge replaces it.

Fighter & Canter

Medium- and light-duty diesel platforms. Fighter shares many ADAS systems with Super Great at a lower tier. Canter includes diesel and mild-hybrid variants. Both use simpler cluster configurations but share the core design system with the heavy-duty line.

Hallucination Risk: Display Resolution

Some LLM outputs may state the Super Great display is "12.3-inch 1920×720." MFTBC does not publish detailed display specifications publicly. No Japanese heavy-duty OEM (FUSO, Hino, Isuzu, UD Trucks) makes this data available in press materials. Treat all LLM-generated display specs as unverified until confirmed against internal documentation.

Try This Now

Test whether your LLM knows the basics of the FUSO product line.

Copy → Paste → Replace Brackets
What are the current Mitsubishi Fuso truck models in production as of [CURRENT YEAR]? For each, list: vehicle class (light/medium/heavy), powertrain type, and primary market. Flag anything you're uncertain about.

What to check: Compare the output against this chapter. Note which facts the LLM gets right, which it hedges on, and which it gets wrong — this calibrates your trust level.

Chapter 02: Regulations, Safety & Display Architecture

Your instrument cluster design must comply with multiple overlapping regulatory layers. Understanding which are mandatory versus advisory is critical for prioritizing design decisions.

Regulatory layers: Mandatory (UNECE R121 · MLIT · FMVSS) · Standards (ISO 15005 · 26262 · 2575) · Advisory / Rating (Euro NCAP 2026) → your instrument cluster design.
All three regulatory layers converge on your cluster design decisions.
| Standard | Scope | Key HMI Requirement |
| --- | --- | --- |
| ISO 15005:2017 | Dialogue principles | Minimize visual demand; 1.5 s single-glance budget (per ISO 16673 occlusion method) |
| ISO 26262:2018 | Functional safety | ASIL B: 200 ms telltale display latency, 100 ms unmotivated-telltale response |
| ISO 2575:2021 | Symbols & colors | Standardized telltale symbols, mandated colors (red/amber/green) |
| ISO 15008:2017 | Character height | Minimum character height and contrast ratios at viewing distance |
| UNECE R121 | Telltale identification | Mandatory telltale symbols and placement rules |
| Euro NCAP 2026 | Safety rating | Physical controls mandate: turn signals, hazard lights, horn, wipers, SOS must not be display-only |

Functional Safety — ASIL B Pipeline

The instrument cluster is classified as ASIL B under ISO 26262. This classification defines hard technical constraints on your display pipeline.

CAN message → CRC + counter integrity check → pass / fail → render ≤ 200 ms → readback (frame verify) → display driver. On failure, the driver sees the SAFE STATE: screen off (kill pin) within 100–200 ms.
ASIL B safety pipeline: from CAN message receipt to driver perception. Failure at any stage triggers safe state (screen off via hardware kill pin).
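
The integrity gate at the front of this pipeline can be sketched in a few lines. The CRC-8 polynomial, 4-bit rolling counter, and message layout below are illustrative placeholders, not MFTBC's actual safety mechanism:

```python
def crc8(data: bytes, poly: int = 0x1D) -> int:
    """Simple CRC-8 (polynomial shown is an example, not the real one)."""
    crc = 0xFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def check_message(payload: bytes, received_crc: int,
                  counter: int, last_counter: int) -> bool:
    """Accept the frame only if the CRC matches and the 4-bit counter advanced by 1."""
    if crc8(payload) != received_crc:
        return False                        # corrupted frame -> safe state
    if (last_counter + 1) % 16 != counter:
        return False                        # stale or repeated frame -> safe state
    return True
```

A frame failing either check is never rendered; the cluster falls back to the safe state described in the caption above.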

Glance-Time Budget

Glance-time scale: 1.5 s SAFE · 2.0 s Euro NCAP limit · beyond that, DANGER (no forward awareness). Blind travel at 85 km/h grows from 0 m to ~35 m across a single glance.
At 85 km/h, a 1.5-second glance covers ~35 m of blind travel. Every display element must be readable within this budget.
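
The ~35 m figure is easy to reproduce and reuse for other speed/glance combinations:

```python
def blind_distance_m(speed_kmh: float, glance_s: float) -> float:
    """Metres travelled with eyes off the road during one glance."""
    return speed_kmh / 3.6 * glance_s  # convert km/h to m/s, multiply by seconds

# 85 km/h with a 1.5 s glance: ~35.4 m of blind travel
```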

SAE J941's Class B (heavy truck) eye ellipses have not been updated since 2010 and don't account for height-adjustable seats. Treat J941-based cluster placement as approximate.

Display Hardware & CAN Architecture

The instrument cluster receives data from dozens of ECUs via the vehicle's CAN bus network. Understanding this pipeline is essential — it determines what data you can display, how fast it arrives, and what safety constraints apply.

Engine ECU (6R30 / BMS) · ADAS ECU (ABA6 / ASA 2.0) · Body ECU (lights / doors) → CAN / CAN FD (J1939, 250 kbps / FD up to 2 Mbps) → instrument cluster (200 ms display budget · CRC + counter check).
CAN bus pipeline from ECU to cluster display. J1939 protocol, CRC-protected messages, 200ms maximum render latency for ASIL B telltales.
CAN Signal Names in LLM Output

All CAN signal names generated by LLMs (e.g., BMS_StateOfCharge, DPF_RegenStatus) are illustrative placeholders. Real signal names come from the vehicle's DBC file, which is proprietary. Always verify against your actual DBC before using in specifications.
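
That verification step can be automated. Below is a stdlib-only sketch that extracts `SG_` signal names from DBC-format text and diffs them against LLM-suggested names; the two-message snippet and every signal name are invented for illustration (in practice, parse your real DBC, ideally with a dedicated DBC library):

```python
import re

# Hypothetical two-signal DBC fragment -- not real MFTBC data.
SAMPLE_DBC = """\
BO_ 100 EV_Status: 8 BMS
 SG_ BMS_StateOfCharge : 0|8@1+ (0.4,0) [0|100] "%" CLUSTER
 SG_ BMS_PackTemp : 8|8@1+ (1,-40) [-40|215] "degC" CLUSTER
"""

def dbc_signal_names(dbc_text: str) -> set:
    """Extract signal names from SG_ lines (simplified DBC grammar)."""
    return set(re.findall(r"^\s*SG_\s+(\w+)", dbc_text, flags=re.M))

def unverified(llm_names: set, dbc_text: str) -> set:
    """Return LLM-suggested signal names that do NOT exist in the DBC."""
    return llm_names - dbc_signal_names(dbc_text)
```

Any name in the returned set is a hallucination candidate and must not appear in a specification.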

Try This Now

Have the LLM draft a regulatory compliance matrix for a display element you're currently working on.

Copy → Paste → Replace Brackets
I'm designing [YOUR DISPLAY ELEMENT] for the [VEHICLE MODEL] instrument cluster. Create a compliance checklist covering: ISO 15005 (glance time), ISO 26262 ASIL B (latency), ISO 2575 (colors/symbols), and UNECE R121 (telltale requirements). For each standard, state the specific requirement and whether my element is likely affected. Flag any requirements you cannot verify.

What to check: Cross-reference every ISO clause number the LLM cites against the actual standard documents. Fabricated clause numbers are a common hallucination.

Sources: ISO 15005 (catalog page — full text is paywalled) · ISO 26262 (catalog page — full text is paywalled) · Euro NCAP

Chapter 03: ADAS, Powertrain & Telltale Design

The 2024+ Super Great carries one of the most comprehensive ADAS suites in the Japanese heavy-duty segment. The cluster must handle warning escalation across multiple systems while maintaining clear visual hierarchy.

ABA6 Warning Escalation

Standby (static · green) → Warning (pulse · amber) → Brake Rec. (rapid · amber→red) → E-Brake (full takeover · red) · Fault (amber).
ABA6 warning escalation: each stage increases visual urgency through color, animation rate, and screen real estate.

Active Sideguard Assist 2.0 monitors blind spots on both sides during turns and lane changes. Three-stage escalation at ≤20 km/h: visual alert → audible alarm + A-pillar lamp → damage mitigation brake. When multiple ADAS systems trigger simultaneously, prioritize by criticality — emergency interventions (ABA full brake, ASA emergency stop) take absolute priority.
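
The prioritize-by-criticality rule can be captured as a tiny arbiter. Stage names and priority values here are illustrative, not taken from MFTBC documentation:

```python
from enum import IntEnum
from typing import List, Optional, Tuple

class Priority(IntEnum):
    """Illustrative criticality tiers -- real values come from the HMI spec."""
    INFO = 0       # standby indications
    CAUTION = 1    # e.g. ASA visual alert
    CRITICAL = 2   # e.g. ABA6 brake recommendation
    EMERGENCY = 3  # e.g. ABA6 full braking: absolute priority

def active_warning(events: List[Tuple[str, Priority]]) -> Optional[str]:
    """The highest-priority event owns the primary display zone."""
    if not events:
        return None
    return max(events, key=lambda e: e[1])[0]
```

With both ABA6 and ASA firing, the emergency intervention wins the zone; the lower-priority warning is demoted or suppressed per the spec.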

Cross-Powertrain HMI Architecture

Shared zone: speed · ADAS telltales · warnings. Diesel module: tachometer · fuel · DPF status · AdBlue · coolant · turbo boost. EV module: SOC · eRange · recuperation · charging · battery temp · V2X. Hydrogen module: H2 pressure · range · FC stack status · refuel progress.
Modular cross-powertrain architecture: shared display zone for speed, ADAS, and warnings, with swappable powertrain-specific modules.

Diesel (Super Great, Fighter, Canter): DPF regeneration has four states: inactive, passive regen, active regen request (pull over), and DPF full (derate imminent). AdBlue uses a three-level depletion warning: advisory (>500 km), warning (100–500 km), and critical (<100 km, with torque restriction). Both follow ISO 2575 color conventions.
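
Those four DPF states map naturally onto a display lookup table. Color choices follow the ISO 2575 conventions above; the text strings and the 24-character line budget used here are illustrative placeholders, not MFTBC copy:

```python
# Hypothetical display table for the four DPF states (placeholder wording).
DPF_DISPLAY = {
    "inactive":       {"color": None,    "text": ""},
    "passive_regen":  {"color": "green", "text": "DPF Regen Active"},
    "active_request": {"color": "amber", "text": "Pull Over - DPF Regen"},
    "dpf_full":       {"color": "red",   "text": "DPF Full - Derate Soon"},
}

def dpf_text_fits(limit: int = 24) -> bool:
    """Check every alert string against the per-line character budget."""
    return all(len(s["text"]) <= limit for s in DPF_DISPLAY.values())
```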

EV (eCanter): The cluster replaces the tachometer with a power/recuperation gauge. Key unique states: SOC with range estimation, charge port status with time-to-full, cell balancing indicator, thermal management warnings, V2X pre-conditioning.

Hydrogen (Hino Profia Z FCV — ARCHION context): uses six 70 MPa tanks with Toyota MIRAI-derived fuel cell stacks. The HMI must display tank pressure per bank, FC stack temperature, a hydrogen leak warning (safety-critical), and dual-source range estimation.

Competitive Landscape

Japanese heavy-duty OEM competitors: Hino Profia (now within ARCHION), Isuzu Giga, UD Trucks Quon (Isuzu Motors). European benchmarks: Mercedes-Benz Actros, Volvo FH, Scania S-Series, DAF XG+. None publish detailed display specifications publicly — making LLM-assisted image analysis particularly valuable for building competitor benchmark libraries.

Competitor Benchmark Matrix (Template)

Use an LLM with competitor press photos (see Screenshot Workflows in Chapter 05) to fill this matrix. The empty cells are your research task.

| Feature | FUSO Super Great | Hino Profia | Isuzu Giga | Actros | Volvo FH |
| --- | --- | --- | --- | --- | --- |
| Display type | | | | | |
| ADAS visualization | | | | | |
| Warning escalation style | | | | | |
| Information hierarchy | | | | | |
| Night mode approach | | | | | |
| EV-specific UI (if applicable) | | | | | |

Try This Now

Use image search or manufacturer press kits to find cluster photos for 2-3 competitors. Paste them into your LLM with this prompt:

Competitor Benchmark Prompt
I've attached instrument cluster images from [LIST COMPETITOR MODELS]. Fill in this comparison matrix based on what you can observe. For each cell, describe what you see. If something isn't visible in the image, mark it "Not visible." Do not guess. Features to compare: display type (analog/digital/hybrid), ADAS visualization approach, warning escalation style, information hierarchy, night mode approach, EV-specific UI elements. Format as a markdown table.

What to check: Verify at least 3 specific claims against manufacturer press materials or automotive press reviews. LLM image analysis is a starting point — not a finished deliverable.

Try This Now

Generate a warning escalation specification for an ADAS system.

Copy → Paste → Replace Brackets
Design a [NUMBER]-stage warning escalation for [ADAS SYSTEM NAME] on the Fuso Super Great cluster. For each stage, define: trigger condition, telltale color (per ISO 2575), animation behavior (static/pulse rate/flash rate), text string (max 24 characters), audio alert type, and what causes escalation to the next stage. Format as a markdown table.

What to check: Verify the color assignments match ISO 2575 conventions (green=normal, amber=caution, red=danger). Check that text strings are actually under 24 characters — count them.
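
"Count them" is exactly the kind of check to delegate to code rather than eyeballs. A two-line sketch with two illustrative candidate strings:

```python
LIMIT = 24  # per-line character budget from the prompt above

candidates = [
    "Pull Over - DPF Regen",        # 21 characters: fits
    "Stop Engine - Oil Pressure",   # 26 characters: over budget
]

too_long = [s for s in candidates if len(s) > LIMIT]
```

Anything in `too_long` needs a shorter rewrite before it reaches the spec.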

Chapter 04: What LLMs Are & How They Work

A Large Language Model (LLM) is a type of AI trained on enormous amounts of text to learn patterns of human language. Given input text, it predicts what should come next. Scale this to billions of parameters and trillions of training words, and next-word prediction becomes surprisingly capable: writing prose, generating code, analyzing documents, and reasoning through multi-step problems.

Think of it as a brilliant, well-read colleague who has never worked at your company, doesn't know your specific situation, and occasionally makes things up with complete confidence. The more context you give, the better the output.

Key Limitation

An LLM has no access to your internal FUSO design system, your Figma files, or your company's proprietary CAN database — unless you provide that information in your prompt. The model's knowledge has a training cutoff date. It may know about ISO 26262:2018 but it has never seen MFTBC's internal specifications.

How We Got Here

2017
The Transformer
Google's "Attention Is All You Need" paper — the architecture behind every major LLM today. All words processed in parallel via attention.
2018–2020
GPT-3 (175B Parameters)
Emergent abilities appeared — performing tasks from just a few examples in the prompt, no retraining needed.
Nov 2022
ChatGPT
100 million users in two months. Made LLMs accessible to non-technical users for the first time.
2023–2024
Multimodal & Reasoning
GPT-4 added image understanding. Claude introduced 200K-token contexts. Reasoning models (o1) scored 83% on AIME 2024 vs 12% for standard models.
2025–2026
AI Agents
AI shifted from chatting to acting — autonomous task execution (Claude Cowork, Claude Code). Models that plan, use tools, and iterate independently.

The Context Window

LLMs process text as tokens — sub-word pieces (~1 token ≈ 0.75 words). Everything you send — your prompt, pasted specs, and the model's response — must fit inside the model's context window. A full DBC file might be 50,000+ tokens. A single ISO standard section might be 10,000 tokens. Current windows range from 4K tokens (GPT-3.5) to 200K tokens (Claude).
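
For budgeting, the ~0.75 words-per-token rule of thumb is usually enough. A sketch (real counts vary by tokenizer; use your provider's own tokenizer for exact numbers):

```python
def estimate_tokens(text: str) -> int:
    """Rough budget estimate: tokens ~= words / 0.75 (English prose)."""
    return round(len(text.split()) / 0.75)

def fits_context(text: str, window_tokens: int, reserve: int = 4000) -> bool:
    """Check the prompt fits, leaving headroom for the model's response."""
    return estimate_tokens(text) <= window_tokens - reserve
```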

Temperature: Controlling Creativity

Temperature controls randomness. Low values produce deterministic output; high values introduce variety. Most models default to ~0.7.

| Task | Temperature | Why |
| --- | --- | --- |
| Spec generation, state matrices | 0 – 0.2 | Deterministic, consistent output |
| ISO compliance checks | 0 | No creativity — factual matching |
| CAN signal extraction, DBC parsing | 0 | Precise data extraction, no embellishment |
| Alert text copywriting | 0.3 – 0.5 | Some variety, still controlled |
| Brainstorming layout alternatives | 0.7 – 0.9 | Explore diverse approaches |
| Icon metaphor exploration, naming | 0.8 – 1.0 | Maximum creative range |
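
What the temperature knob actually does is rescale the model's next-token distribution. A toy simulation with hand-picked logits (not real model output):

```python
import math

def sample_probs(logits, temperature):
    """Return next-token probabilities after temperature scaling."""
    if temperature == 0:                   # argmax: fully deterministic
        probs = [0.0] * len(logits)
        probs[logits.index(max(logits))] = 1.0
        return probs
    scaled = [l / temperature for l in logits]
    m = max(scaled)                        # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]
```

Low temperature sharpens the distribution toward the top choice (consistent specs); high temperature flattens it (varied brainstorming).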

Fundamental Limitations

| Limitation | What It Means For Your Work | How To Mitigate |
| --- | --- | --- |
| Hallucination | Generates plausible but false info — e.g., fabricated ISO clause numbers, invented CAN signal names | Verify every safety-critical claim against primary sources |
| Knowledge cutoff | Doesn't know events after its training date — may miss latest ARCHION developments | Provide current information in your prompt or use web-search-enabled models |
| No persistent memory | Each conversation starts blank — your design system context is lost between sessions | Use system prompts and save/reload key context (see Ch08 WSCI framework) |
| Sycophancy | Agrees with you even when you're wrong — dangerous for safety-critical specs | Ask neutral questions; explicitly invite disagreement |
| Inconsistency | Same prompt, different results each time — problematic for reproducible specs | Set temperature to 0; run critical prompts 3x and compare |
| Math errors | Can miscalculate pixel dimensions, character counts, or timing values | Ask the model to write code for calculations rather than doing mental math |

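
Per the last row, delegate arithmetic to code. For example, minimum character height from viewing distance and subtended visual angle; the 20-arcminute figure below is a commonly cited legibility minimum and must be verified against ISO 15008 itself:

```python
import math

def min_char_height_mm(viewing_distance_mm: float,
                       arcminutes: float = 20.0) -> float:
    """Character height subtending a given visual angle at a given distance.

    20 arcminutes is an assumed, commonly cited minimum -- verify against
    ISO 15008 before using in a specification.
    """
    angle_rad = math.radians(arcminutes / 60.0)
    return viewing_distance_mm * math.tan(angle_rad)

# 800 mm viewing distance -> roughly 4.7 mm minimum character height
```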
Under the Hood: How LLMs Actually Work (Optional Deep Dive)

Tokenization

The model breaks your text into tokens — sub-word pieces. "instrument cluster" becomes ["instrument", " cluster"]. "eCanter" might become ["e", "C", "anter"]. Each token gets a numeric ID.

Tokenization, HMI context: "Super Great battery charge telltale" → Super | Great | battery | charge | tell | tale → token IDs 5765 · 8120 · 11402 · 5894 · 2025 · 8102.
"telltale" gets split into two tokens. Rare or compound words are broken into sub-word pieces. Numbers below are illustrative token IDs.

Attention — How LLMs Understand Context

For each token, the model computes how much to "focus on" every other token. When processing "The brake warning activated because it sensed danger," attention figures out "it" refers to the brake warning — not the danger. It matches Queries against Keys to produce scores, then gathers information from Values. Think of it like a library: the Query is your search question, the Key is each book's spine label, and the Value is the content inside.

Attention scores, resolving "it": in "The brake warning activated because it sensed danger," the token "it" attends most strongly to "brake warning," the system that sensed danger. Modern models run 12–96 attention heads in parallel, each tracking different relationships.
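
The Query/Key/Value library analogy can be reduced to runnable arithmetic. The vectors here are tiny hand-picked 2-D examples, not learned embeddings:

```python
import math

def softmax(xs):
    m = max(xs)                      # subtract max for numeric stability
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def attend(query, keys, values):
    """Weight each value by softmax(query . key / sqrt(d))."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # Blend the value vectors by their attention weights.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]
```

A query aligned with the first key pulls the output almost entirely toward the first value, which is exactly how "it" gathers information from "brake warning."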

The Training Pipeline

Three-stage training:
1. Pre-training: trillions of tokens from the web; the objective is "predict the next word." Months, millions of dollars → raw knowledge.
2. Supervised fine-tuning: curated Q&A examples showing ideal assistant behavior. Days, supervised → follows instructions.
3. Reinforcement learning: human rankings (RLHF) or verifiable rewards for math and code (RLVR). Weeks, RL → aligned, can reason.
Pre-training is the heavy lift. SFT and RL are refinements layered on top.

Try This Now

Test the practical limits described in this chapter with a real task.

Copy → Paste → Replace Brackets
Explain the difference between the [FUSO TERM 1] and [FUSO TERM 2] systems on the Mitsubishi Fuso Super Great. Then tell me: how confident are you in this answer, and what's your source? If you're uncertain, say so explicitly rather than guessing.

What to check: Did the model express appropriate uncertainty, or did it state everything with full confidence? This tests the hallucination and sycophancy risks from this chapter.

Chapter 05: Models, Tiers & Capabilities

Not all LLMs are equal. Models exist on a spectrum of capability, speed, and cost — and choosing the right tier for the right task is a core skill. Using a flagship model for every quick question wastes time and money; using a small model for complex specification work produces unreliable results.

The Three Tiers

| Tier | Examples | Strengths | Best FUSO Use Cases |
| --- | --- | --- | --- |
| Flagship | Claude Opus, GPT-4o, Gemini Ultra | Deep reasoning, nuance, long context, complex multi-step tasks | ISO compliance analysis, cross-system state matrix generation, HMI architecture planning, spec review with edge cases |
| Mid-tier | Claude Sonnet, GPT-4o-mini, Gemini Flash | Good reasoning at faster speed, strong instruction following | Prompt iteration, alert copywriting, competitor analysis summaries, documentation drafts |
| Small / Fast | Claude Haiku, GPT-3.5 | Very fast, low cost, good for simple structured tasks | CAN signal name formatting, quick translations, boilerplate text, simple lookups |

Rule of Thumb

Start with the smallest model you think might work. If the output quality isn't sufficient, move up one tier. This is faster and cheaper than always defaulting to flagship. Most daily tasks — copywriting alert messages, reformatting tables, generating boilerplate — work perfectly on mid-tier models.

Multimodal Capabilities

Modern flagship and mid-tier models can see images. This unlocks powerful HMI workflows:

  • Screenshot analysis: Paste a cluster screenshot and ask "Does this layout satisfy ISO 15005 glance-time constraints?"
  • Competitor benchmarking: Upload photos of Actros, Volvo FH, and Scania clusters and ask for a structured comparison of information hierarchy
  • Icon evaluation: Share a telltale icon set and ask "Which symbols might be confused at 1.5-second glance time?"
  • Handwritten sketches: Photograph a whiteboard sketch and ask the model to convert it into a structured layout specification

Screenshot & Image Workflows

For a UI/UX designer, pasting screenshots directly into the LLM is the single highest-ROI multimodal workflow. You can get design feedback, compliance checks, and competitor analysis from images alone — no specification documents needed.

1
Figma Screenshot → Layout Feedback

Take a screenshot of your current cluster layout in Figma. Paste it into Claude or ChatGPT with this prompt:

Layout Review Prompt
I've attached a screenshot of my instrument cluster layout for the [VEHICLE MODEL]. This is a [RIGHT/LEFT]-hand drive vehicle for the [MARKET] market. Review this layout for: (1) ISO 15005 glance-time compliance — can each information group be read in ≤1.5 seconds? (2) Information hierarchy — is the most critical data (speed, active warnings) the most visually prominent? (3) Zone balance — is the layout optimized for the driver's primary viewing angle? Give specific, actionable feedback on what to change and why.
2
Press Photos → Structured Comparison

Download press photos of competitor clusters (Actros, Volvo FH, etc. from manufacturer press kits). Paste 2-3 images into a single prompt:

Photo Comparison Prompt
I've attached [NUMBER] instrument cluster photos from these trucks: [LIST MODELS]. Create a structured comparison table with columns: Feature, [Model 1] Approach, [Model 2] Approach, [Model 3] Approach, Best Practice Winner. Compare: information hierarchy, color usage, ADAS visualization method, typography/readability approach, and night mode design (if visible).
3
Telltale Icon Set → Legibility Assessment

Export your telltale icon set from Figma as a single image. Paste and ask:

Icon Legibility Prompt
These are telltale icons for a truck instrument cluster viewed at ~800mm distance. At 1.5-second glance time: (1) Which icons could be confused with each other? (2) Which icons lack sufficient contrast for night driving? (3) Which icons might not be recognized by a driver unfamiliar with the specific system? For each issue, suggest a specific fix.
Limitations of Image Analysis

LLMs cannot measure actual pixel dimensions, calculate true contrast ratios, or simulate peripheral vision. Image-based feedback is qualitative, not quantitative. Use it for early-stage review and brainstorming, not as a substitute for proper usability testing or instrument-grade photometric analysis.

Model-Specific Behavior

Different model families have distinct personalities. Claude tends toward careful, nuanced responses and will express uncertainty. GPT models tend toward confident, comprehensive outputs. Neither approach is inherently better — but awareness helps you calibrate your verification effort.

Claude (Anthropic)

  • Tends to flag uncertainty rather than guess
  • Strong at following complex multi-part instructions
  • Will push back if asked to do something it considers harmful
  • Very large context windows (200K tokens)
  • Cowork mode for autonomous file/browser tasks

GPT-4 / ChatGPT (OpenAI)

  • Strong code generation and structured output
  • More likely to produce confident answers even when uncertain
  • Good at creative tasks and brainstorming
  • Extensive plugin/tool ecosystem
  • Image generation via DALL-E integration

Context Window Sizes

The context window determines how much information you can include in a single conversation. For HMI specification work, this matters enormously — a full CAN DBC file, multiple ISO standard excerpts, and your design system documentation can easily exceed 100K tokens.

Context window comparison: GPT-3.5 at 4K tokens (~3 pages) · GPT-4o at 128K tokens (~100 pages) · Claude Opus/Sonnet at 200K tokens (~150 pages).
Larger windows let you include more reference material, but the model may lose focus on details buried in the middle of very long contexts.
Bigger Isn't Always Better

Just because a model can accept 200K tokens doesn't mean you should always fill that window. Research shows models perform best when key information appears near the beginning or end of the context. Critical requirements buried in the middle of a massive prompt may be overlooked. Be strategic about what you include.

Try This Now

Compare model tiers on the same task to build intuition for when to use which.

Copy → Paste → Replace Brackets
Create a state transition matrix for the [SYSTEM NAME] with these states: [LIST YOUR STATES]. Include: trigger condition for each transition, display changes, priority level, and fallback if the CAN signal is lost.

What to check: Run this on a small model (Haiku/GPT-3.5) and a flagship model (Opus/GPT-4). Compare: did the small model miss edge cases? Did it handle the signal-lost fallback? This calibrates your model selection.

Chapter 06: Prompt Engineering for FUSO HMI

Prompt engineering is the skill of communicating with LLMs effectively. A well-structured prompt is the difference between a vague, generic response and a precise, actionable output tailored to your specific FUSO HMI design task. This is the single most important skill to develop.

The Five Foundational Principles

Principle 1
Specificity Compounding
Every specific detail you add compounds quality. "Design a warning icon" produces generic clip art. "Design a DPF regeneration warning icon for the Super Great cluster, amber per ISO 2575, minimum 7mm height at 800mm viewing distance, must be distinguishable from the AdBlue warning in peripheral vision at highway speed" produces something you can actually use. Each constraint narrows the output space toward your real requirements.
Principle 2
Positive Framing — Tell What To Do, Not What To Avoid
Instead of "Don't make it too complex" or "Avoid cluttering the display," say "Use a maximum of 4 information elements in the primary zone" or "Prioritize the three highest-severity telltales." Negative instructions leave the model guessing what you do want. Positive framing gives it a clear target. One specific "do" instruction is worth five "don't" instructions.
Principle 3
Context Is King — The Goldilocks Principle
Too little context forces the model to guess — and it will fill gaps with generic assumptions. Too much context buries your actual question in noise and the model loses focus. The sweet spot: include everything the model needs to answer correctly, and nothing it doesn't. For FUSO HMI work, this usually means: vehicle model, powertrain type, applicable standards, your specific design constraints, and what format you want the answer in.
Principle 4
Structured Output Control
Tell the model exactly how to format its response. "Return a markdown table with columns: Signal Name, Display Zone, Priority Level, Color, Animation." Without format instructions, you get prose paragraphs that need manual reformatting. With them, you get structured data you can paste directly into specs or spreadsheets.
Principle 5
Zero-Shot First, Then Escalate
Start with the simplest prompt (zero-shot — no examples). If the output isn't right, add one or two examples (few-shot). If that's still insufficient, add step-by-step reasoning instructions (Chain-of-Thought). Don't over-engineer your prompt from the start — many tasks work perfectly with a clear, specific zero-shot instruction.

Putting Principles Into Practice

The five principles above describe what makes a good prompt. Here's the concrete checklist for how to build one:

1
Be Specific

Not "design a dashboard" but "design a DPF regen warning for the Super Great 2025, amber per ISO 2575, RHD Japan market, readable within 1.5-second glance at 85 km/h."

2
Provide Context

Paste the relevant CAN signals, display dimensions, and ISO clauses the model needs. Example: include the DPF-related signals from your DBC file and the ISO 2575 color mandates for caution states.

3
Define Output Format

"Respond as a markdown table" / "Return as JSON" / "Structure as a Figma component spec with layers, colors, and spacing values."

4
Assign a Role

"You are a senior automotive HMI designer familiar with ISO 26262 ASIL B constraints and Japanese commercial vehicle regulations for the right-hand drive market."

5
Iterate

The first output is a draft, not a final answer. Refine with specific feedback: "make the DPF status icon more prominent," "add the signal-lost fallback state," "shorten the warning text to under 20 characters."

The Prompt Skeleton

Role:        Who should the AI act as?
Context:     Vehicle, display, CAN signals, standards
Task:        What exactly should it produce?
Format:      Table, JSON, prose, Figma spec?
Constraints: Glance time, ASIL level, RHD, character limits
Examples:    (Optional) Show the pattern you want
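
The skeleton lends itself to a trivial helper so prompts are assembled consistently across a team. Field names mirror the skeleton; all content is supplied by the caller:

```python
def build_prompt(role: str, context: str, task: str, fmt: str,
                 constraints: str, examples: str = "") -> str:
    """Assemble the six skeleton fields into one prompt string."""
    parts = [
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        f"Format: {fmt}",
        f"Constraints: {constraints}",
    ]
    if examples:  # Examples section is optional, per the skeleton
        parts.append(f"Examples:\n{examples}")
    return "\n\n".join(parts)
```

Keeping the fields explicit makes it obvious when a prompt is missing context or constraints before it ever reaches the model.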

Prompt Anatomy — A Real FUSO Example

Here's how the skeleton maps to a real-world prompt:

Weak Prompt
Make a warning for the DPF filter.
Strong Prompt
Role: You are a UI/UX designer specializing in heavy-duty truck instrument clusters for the Japanese market (right-hand drive).

Context: I'm designing the DPF regeneration warning flow for the Mitsubishi Fuso Super Great's 12-inch digital cluster. The 6R30 diesel engine uses a DPF (Diesel Particulate Filter) that requires periodic regeneration. There are 4 states: inactive, passive regen (automatic), active regen request (driver must pull over), and DPF full (torque derate imminent).

Task: Create a complete DPF regeneration display specification covering all 4 states. For each state, define:
1. Telltale icon description and color (per ISO 2575)
2. Accompanying text string (max 2 lines, max 24 characters per line)
3. Display zone and priority level
4. Animation behavior (static, pulse rate, flash rate)
5. Escalation trigger (what causes transition to next state)

Constraints: All elements must be readable within the ISO 15005 1.5-second glance budget. The active regen request must use the highest available non-safety-critical priority. Colors follow ISO 2575: green=normal, amber=caution, red=danger.

Format: Return as a markdown table with one row per state.

Six Prompt Patterns for Daily HMI Work

1
The Spec Generator

"Given [vehicle model], [powertrain], and [applicable standards], generate a complete [component] specification including [specific fields]. Format as [table/JSON/markdown]."

2
The Compliance Checker

"Review this [design/specification] against [ISO standard]. For each element, state whether it meets the requirement, cite the specific clause, and flag any violations with suggested corrections."

3
The State Matrix Builder

"For the [system name] with states [list states], create a state transition matrix. Include: trigger conditions, display changes at each transition, priority level, and fallback behavior if the CAN signal is lost."

4
The Copywriter

"Write [number] alert text variants for [warning type] on the [vehicle] cluster. Each must be under [character limit], use [language], and follow the [tone] voice. Rate each variant for clarity at highway-speed glance time."

5
The Layout Analyst

"Analyze this cluster screenshot. Identify the information hierarchy, estimate glance-time compliance per zone, and suggest improvements for [specific concern]. Consider that this is a right-hand drive vehicle for the Japan market."

6
The Competitor Benchmarker

"Compare the [competitor vehicle] instrument cluster against our [FUSO model] design for [specific aspect]. Structure the comparison as: feature, their approach, our approach, advantage/gap, recommendation."

Few-Shot Prompting — Teaching By Example

When zero-shot doesn't produce the format or quality you need, show the model what you want with one or two examples:

Few-Shot Prompt
I need telltale specifications for FUSO cluster warnings. Follow this exact format:

Example:
SIGNAL: Engine Oil Pressure Low
ICON: Oil can with drop, filled
COLOR: Red (per ISO 2575 — immediate danger)
PRIORITY: Critical — Zone A (center cluster)
ANIMATION: 2 Hz flash, continuous
TEXT: "Stop Engine — Oil Pressure"
SOUND: Continuous chime, 2400 Hz

Now generate specifications for:
1. DPF Regeneration Request (active — pull over)
2. AdBlue Level Critical (<100 km remaining)
3. Battery Cell Imbalance Warning (eCanter)

Chain-of-Thought — Step-by-Step Reasoning

For complex decisions where you need the model to show its work, ask it to reason step by step. This produces higher-quality analysis because the model can catch its own errors mid-reasoning.

Chain-of-Thought Prompt
A FUSO Super Great driver is traveling at 85 km/h on a Japanese expressway when both ABA6 (forward collision) and Active Sideguard Assist (blind spot) trigger warnings simultaneously. Think through this step by step:
1. What is the relative severity of each warning?
2. Which takes visual priority on the cluster and why?
3. How should the display transition if ABA6 escalates to emergency braking?
4. What happens to the ASA warning during the ABA6 event?
5. After the ABA6 event clears, how should the display recover?
Show your reasoning for each step before giving your final recommendation.

Right-Hand Drive Warning

Japan market = right-hand drive. The driver's primary viewing angle is mirrored compared to LHD markets. When asking an LLM about display zone placement, always specify "right-hand drive, Japan market." Without this, the model defaults to LHD assumptions from its predominantly Western training data, placing critical information in the wrong zones.

Try This Now

Transform a vague prompt into a specific one using the skeleton from this chapter.

Copy → Paste → Replace Brackets Take your weakest recent LLM interaction — one where the output was too generic. Rewrite it using the skeleton: Role: [WHO] / Context: [VEHICLE, DISPLAY, STANDARDS] / Task: [SPECIFIC OUTPUT] / Format: [TABLE/JSON/PROSE] / Constraints: [GLANCE TIME, ASIL, RHD, CHAR LIMITS]. Run both versions and compare.

What to check: The strong version should produce output you can use with minimal editing. If it still requires heavy rework, your context or constraints need more specificity.

Chapter 07Worked Example: eCanter Charging Status Display

This chapter walks through one complete design task from start to finish — prompt, output, verification, iteration, final spec. Every technique from the preceding chapters is applied in sequence.

The Brief

You need to design the charging status display for the eCanter's digital cluster. The display must show: charging state (not connected, AC charging, DC fast charging), current SOC percentage, estimated time to full charge, and charging power in kW. It must work for all three battery configurations (S/M/L: 41/83/124 kWh). The display will be visible when the vehicle is stationary and connected to charging infrastructure, but it also needs a resting state for when no charger is connected. All text, colors, and iconography must comply with ISO 2575 and meet the 1.5-second glance-time budget defined by ISO 16673.

First Prompt Attempt

Using the prompt skeleton from Chapter 06, we construct a detailed first prompt. This is a realistic first attempt — thorough but not yet accounting for every edge case.

Strong Prompt — First Attempt
Role: You are a UI/UX designer specializing in light-duty electric commercial vehicle instrument clusters for the Japanese market (right-hand drive).

Context: I'm designing the charging status display for the Mitsubishi Fuso eCanter digital cluster. The eCanter is a fully electric light-duty truck available in three battery configurations: S (41 kWh), M (83 kWh), and L (124 kWh). It supports both AC charging (Type 2, up to 22 kW) and DC fast charging (CCS2, up to 104 kW). The charging display appears on the instrument cluster when the vehicle is stationary and the charging port is engaged.

Task: Create a complete charging status display specification. Define the following charging states: Not Connected, AC Charging, DC Fast Charging, and Charging Complete. For each state, specify:
1. Telltale icon description and color (per ISO 2575)
2. Primary text string (max 24 characters per line)
3. Secondary information displayed (SOC%, time remaining, power in kW)
4. Animation behavior (static, pulsing, or dynamic fill)
5. Display priority level

Constraints:
— All text must be readable within a 1.5-second glance (per ISO 16673 occlusion method).
— Colors must follow ISO 2575: green = normal/OK, amber = caution, red = danger, white = neutral information.
— Text strings must not exceed 24 characters per line for glance-time compliance.
— The specification must work identically across all three battery configurations (S/M/L); only the numeric values change.
— Right-hand drive layout (Japan market).

Format: Return as a markdown table with one row per charging state. Columns: State, Icon, Color, Primary Text, Secondary Info, Animation, Priority.

Raw LLM Output

The model returns a structured table. At first glance it looks solid — correct format, reasonable content, professional language. But three errors are embedded in the response.

LLM Response — First Pass

Here is the charging status display specification for the eCanter digital cluster:

eCanter Charging Status Display — State Specification

| State | Icon | Color | Primary Text | Secondary Info | Animation | Priority |
|---|---|---|---|---|---|---|
| Not Connected | Plug outline, disconnected | White | No Charger Connected | SOC: [xx]% | Static | Low — passive info |
| AC Charging | Plug with AC wave symbol | Green | AC Charging | SOC: [xx]% · [x]h [xx]m to full · [xx.x] kW | Slow pulse (1 Hz), battery fill animation | Medium — active status |
| DC Fast Charging | Plug with lightning bolt | Green | DC Fast Charging in Progress | SOC: [xx]% · [x]h [xx]m to full · [xxx.x] kW | Fast pulse (2 Hz), rapid battery fill animation | Medium-High — active status |
| Charging Complete | Plug with checkmark | Blue | Charge Complete | SOC: 100% | Static glow | Low — confirmation |

Notes:
— The battery fill animation dynamically reflects current SOC level across all states.
— Secondary information adapts automatically to the battery configuration (S/M/L); the display logic is identical, only the computed values for time-to-full and max power change.
— "DC Fast Charging in Progress" provides clear differentiation from the AC state for drivers who may be unfamiliar with charging types.
— The blue color for Charging Complete provides a calming visual indicator that the session has ended successfully.
— All text strings are designed for quick comprehension within the glance-time budget.

Verification Pass

Now we apply the verification checklist from Chapter 10. This is the expert's most critical contribution — the model cannot reliably self-verify against domain-specific standards.

Verification Checklist — 3 Errors Found

1. ISO 2575 Color Violation — "Charging Complete" uses blue.
Checking color assignments against ISO 2575: the "Charging Complete" state uses blue, but ISO 2575 mandates green for normal/OK states. A completed charge is a positive, normal condition — it must be green, not blue. Blue is not part of the ISO 2575 standard color vocabulary for vehicle telltales. The model likely borrowed this from consumer electronics conventions (smartphones, laptops) where blue often indicates "full." This is exactly the kind of domain bleed that Chapter 04 warned about.

2. Text String Exceeds 24-Character Limit — "DC Fast Charging in Progress" = 28 characters.
Counting characters in each primary text string: "No Charger Connected" = 20 chars (OK), "AC Charging" = 11 chars (OK), "DC Fast Charging in Progress" = 28 chars (FAIL — exceeds the 24-character constraint by 4 characters), "Charge Complete" = 15 chars (OK). The model added "in Progress" for clarity but violated the explicit constraint. The string must be shortened to 24 characters or fewer — e.g., "DC Fast Charging" (16 chars).

3. Missing Edge Case — No "Charging Interrupted" state.
The specification covers the happy path only. There is no state for a charging session that is unexpectedly interrupted — cable disconnection during charge, EVSE fault, ground fault, communication error between the vehicle and charger. In the real world, charging interruptions are common and the driver must be clearly informed. A "Charging Interrupted" state with an amber caution indicator is required. The model was not asked for it explicitly, but an expert knows it is essential for any production charging display.

Iteration Prompt

We send a targeted follow-up correcting all three errors and adding the missing state. Notice: we don't re-explain the entire context. We reference the prior output and provide specific corrections.

Correction Prompt — Targeted Iteration
Your charging status specification has three issues that need correction, plus one missing state:

1. Color fix: "Charging Complete" must use green, not blue. ISO 2575 does not include blue in the standard telltale color set. A completed charge is a normal/OK condition = green.

2. Text length fix: "DC Fast Charging in Progress" is 28 characters — exceeds the 24-character-per-line constraint. Shorten it to 24 characters or fewer while keeping it clearly distinct from the AC state.

3. Missing state: Add a "Charging Interrupted" state for unexpected disconnection, EVSE faults, or communication errors. This should use amber (caution per ISO 2575), include a prompt to check the connection, and have a higher priority than the normal charging states.

Please regenerate the complete table with all corrections applied.

Corrected Output

The model returns a revised specification with all corrections applied.

LLM Response — Corrected

eCanter Charging Status Display — Revised Specification

| State | Icon | Color | Primary Text | Secondary Info | Animation | Priority |
|---|---|---|---|---|---|---|
| Not Connected | Plug outline, disconnected | White | No Charger Connected | SOC: [xx]% | Static | Low |
| AC Charging | Plug with AC wave symbol | Green | AC Charging | SOC: [xx]% · [x]h [xx]m to full · [xx.x] kW | Slow pulse (1 Hz), battery fill | Medium |
| DC Fast Charging | Plug with lightning bolt | Green | DC Fast Charging | SOC: [xx]% · [x]h [xx]m to full · [xxx.x] kW | Fast pulse (2 Hz), rapid battery fill | Medium-High |
| Charging Interrupted | Plug with exclamation mark | Amber | Charging Interrupted | SOC: [xx]% · Last power: [xx.x] kW | 2 Hz flash | High |
| Charging Complete | Plug with checkmark | Green | Charge Complete | SOC: 100% | Static | Low |

Corrections applied:
— "Charging Complete" color changed from blue to green (ISO 2575 compliance).
— "DC Fast Charging in Progress" (28 chars) shortened to "DC Fast Charging" (16 chars).
— "Charging Interrupted" state added with amber indicator and high priority.
— Interrupted state text "Charging Interrupted" = 20 chars (within limit).

Devil's Advocate

Before finalizing, we use the devil's advocate technique to stress-test a design decision. We pick one choice that seems reasonable and ask the model to argue against it.

Devil's Advocate Prompt You specified that both AC Charging and DC Fast Charging use the same green color, differentiated only by text and animation speed. Argue against this decision. What are the strongest reasons these two states should use different visual treatments? Consider driver confusion, urgency differences, and the possibility that a driver connects to DC expecting fast charge speeds but is actually receiving AC rates due to a fallback.
Devil's Advocate Response

Three arguments against using identical green for both charging modes:

1. Fallback misidentification. If a driver connects to a DC fast charger but the vehicle falls back to AC charging due to a CCS communication error, identical green indicators mean the driver may not notice for 30+ minutes. The power reading (e.g., 6.6 kW vs. 50 kW) is a secondary indicator that requires reading numbers — not a glance-time-compliant differentiator. A distinct visual treatment for DC (such as a unique icon animation pattern) would make fallback scenarios immediately obvious.

2. Time expectation mismatch. DC fast charging on the L-pack (124 kWh) from 20% to 80% takes roughly 60 minutes. AC charging for the same range takes 6+ hours. These are fundamentally different operational commitments. Using the same color implicitly tells the driver "these are equivalent" when they are not.

3. Fleet operations context. The eCanter is a commercial vehicle. Fleet dispatchers may walk through a depot checking charge status across dozens of vehicles. At a distance, color differentiation between AC and DC is the fastest way to assess fleet readiness.

Counter-argument: Despite these points, ISO 2575 reserves color meaning for safety severity, not operational modes. Both AC and DC charging are normal operating conditions = green. Introducing a non-standard color distinction would violate the standard. The stronger solution is to keep both green but use clearly distinct icon animations and ensure the power readout (kW) is prominently sized as a secondary differentiator.

Final Specification

After verification, iteration, and adversarial review, we have a production-ready specification. The devil's advocate pass confirmed the color decision is correct per ISO 2575 but surfaced the fallback scenario, which we address by adding a note about kW display prominence.

eCanter Charging Status Display — Final Specification

Applies to all battery configurations: S (41 kWh), M (83 kWh), L (124 kWh). All numeric values are dynamically computed; display logic is identical across configurations.

| State | Icon | Color | Primary Text | Secondary Info | Animation | Priority |
|---|---|---|---|---|---|---|
| Not Connected | Plug outline, disconnected | White | No Charger Connected | SOC: [xx]% | Static | Low |
| AC Charging | Plug with AC wave symbol | Green | AC Charging | SOC: [xx]% · [x]h [xx]m remaining · [xx.x] kW | Slow pulse (1 Hz), battery fill animation | Medium |
| DC Fast Charging | Plug with lightning bolt | Green | DC Fast Charging | SOC: [xx]% · [x]h [xx]m remaining · [xxx.x] kW | Fast pulse (2 Hz), rapid battery fill animation | Medium-High |
| Charging Interrupted | Plug with exclamation triangle | Amber | Charging Interrupted | SOC: [xx]% · Last power: [xx.x] kW | 2 Hz flash | High |
| Charging Complete | Plug with checkmark | Green | Charge Complete | SOC: 100% | Static | Low |

Character count verification: No Charger Connected (20) · AC Charging (11) · DC Fast Charging (16) · Charging Interrupted (20) · Charge Complete (15). All within 24-character limit.
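These checks are mechanical enough to script. Below is a minimal lint sketch, not official FUSO tooling, with the state table transcribed from the final specification above; the allowed color set reflects the constraint stated in this chapter, not the full ISO 2575 text.

```python
MAX_CHARS = 24
ALLOWED_COLORS = {"Red", "Amber", "Green", "White"}  # constraint set used in this chapter

SPEC = [
    # (state, color, primary text) transcribed from the final table above
    ("Not Connected",        "White", "No Charger Connected"),
    ("AC Charging",          "Green", "AC Charging"),
    ("DC Fast Charging",     "Green", "DC Fast Charging"),
    ("Charging Interrupted", "Amber", "Charging Interrupted"),
    ("Charging Complete",    "Green", "Charge Complete"),
]

def lint_spec(rows):
    """Return a list of human-readable violations; an empty list means the spec passes."""
    issues = []
    for state, color, text in rows:
        if color not in ALLOWED_COLORS:
            issues.append(f"{state}: color '{color}' outside allowed set")
        if len(text) > MAX_CHARS:
            issues.append(f"{state}: '{text}' is {len(text)} chars (limit {MAX_CHARS})")
    return issues

print(lint_spec(SPEC))  # prints [] (the final spec passes both checks)
```

Run against the first-pass LLM output, the same lint would flag both the blue "Charging Complete" color and the 28-character DC string, exactly the two mechanical errors found during verification.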

Design note: The charging power value (kW) should be displayed at a minimum font size of 32 dp to serve as a glance-time-compliant differentiator between AC and DC modes, addressing the fallback scenario identified during adversarial review.

Takeaway

Total time: approximately 15 minutes. Without LLM: 2–3 hours of manual specification writing and standards cross-referencing. The LLM did not produce a perfect first draft — it produced a strong starting point that expert review refined into a usable specification.
The value is not in blind acceptance. It is in the speed of the review-and-iterate cycle.
Pattern Recap

This worked example used five techniques in sequence: (1) structured prompt with the Role/Context/Task/Constraints/Format skeleton, (2) domain-expert verification against ISO 2575 and glance-time standards, (3) targeted iteration with specific corrections, (4) devil's advocate adversarial review, and (5) final specification lockdown with character-count verification. This is the workflow. Every HMI design task through an LLM should follow this pattern — or a deliberate subset of it when time is constrained.

Chapter 08Advanced Techniques & Context Engineering

Beyond basic prompting, advanced techniques let you extract consistently higher-quality results, manage long design sessions, and tackle complex multi-part HMI specifications that would be difficult with single prompts.

Self-Consistency Through Multiple Passes

For critical specifications, don't rely on a single model response. Run the same prompt 3 times and compare results. Where all three agree, you have high confidence. Where they diverge, that's exactly where you need human expert judgment. This technique is especially valuable for:

  • Warning priority rankings — does the model consistently rank ABA6 above DPF warnings?
  • Glance-time estimates — are the timing recommendations consistent?
  • Color assignments — does it always map to ISO 2575 correctly?
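The multiple-pass pattern can be wired into a small harness. This is a sketch under one assumption: `ask_llm` is a placeholder for whatever client you actually use, not a real API.

```python
from collections import Counter

def ask_llm(prompt: str) -> str:
    """Placeholder for your actual LLM API call; substitute your client here."""
    raise NotImplementedError

def self_consistent_answer(prompt: str, runs: int = 3, call=ask_llm):
    """Run the same prompt several times; return the majority answer plus
    whether all runs agreed (full agreement = high confidence)."""
    answers = [call(prompt) for _ in range(runs)]
    winner, votes = Counter(answers).most_common(1)[0]
    return winner, votes == runs

# Illustration with a stubbed model that disagrees once:
responses = iter(["ABA6 > DPF", "ABA6 > DPF", "DPF > ABA6"])
answer, unanimous = self_consistent_answer("Rank ABA6 vs DPF",
                                           call=lambda p: next(responses))
# answer == "ABA6 > DPF", unanimous == False: route to human expert review
```

Any run where `unanimous` is false is precisely the divergence case described above, and it should go to a human expert rather than be resolved by another model call.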

Reflection & Self-Correction

Ask the model to review its own output before you accept it. This simple technique catches errors the model would miss in a single pass.

Reflection Pattern

[After receiving output]
Now review your own specification above. Check for:
1. Any ISO 2575 color violations
2. Any text strings exceeding 24 characters per line
3. Any state transitions that could leave the display in an undefined state
4. Any conflicts between simultaneous warning priorities
List every issue you find, then provide a corrected version.
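In a scripted workflow, reflection becomes a second model call that receives the draft plus the checklist. A sketch, assuming a placeholder `call` function in place of a real client:

```python
REVIEW_CHECKLIST = """Now review your own specification above. Check for:
1. Any ISO 2575 color violations
2. Any text strings exceeding 24 characters per line
3. Any state transitions that could leave the display in an undefined state
4. Any conflicts between simultaneous warning priorities
List every issue you find, then provide a corrected version."""

def draft_and_reflect(task_prompt, call):
    """Two-pass reflection: generate a draft, then feed it back to the
    model together with the review checklist for self-correction."""
    draft = call(task_prompt)
    return call(draft + "\n\n" + REVIEW_CHECKLIST)

# Stubbed 'model' for illustration: appends a marker on the review pass
result = draft_and_reflect(
    "Spec the DPF telltale",
    call=lambda p: p + " [reviewed]" if "review your own" in p else "DRAFT",
)
```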

Devil's Advocate Technique

Force the model to argue against its own recommendation. This surfaces edge cases and risks you might not have considered.

Devil's Advocate Prompt

You just recommended placing the SOC indicator in the left cluster zone for the eCanter. Now argue against this placement. What are the strongest reasons this could be wrong? Consider:
- Right-hand drive viewing angles
- Competition with ADAS warnings for visual attention
- Low-light readability
- Driver expectations coming from diesel FUSO models
Give me your three strongest counterarguments.

Context Engineering — Managing Long Sessions

In extended design sessions, the conversation grows and the model's earlier context gets compressed or lost. This is context drift — the model gradually "forgets" constraints you set at the beginning. Symptoms:

  • The model stops applying your design system rules
  • Outputs start contradicting earlier specifications
  • Format consistency degrades
  • The model forgets which vehicle model or powertrain you're designing for

The WSCI Framework

Four strategies for managing context effectively — think of them as memory management for LLMs:

Write — Save for Later

Persist key decisions outside the conversation. Save successful prompt templates, finalized specifications, and design decisions to files. Retrieve them when starting new sessions instead of re-explaining everything.

Select — Pull Only What's Relevant

Don't paste your entire DBC file. Extract the 15 signals relevant to the current display zone. Include only the ISO clauses that apply to this specific task. Surgical context selection beats brute-force dumping.

Compress — Summarize Long Context

Before a long conversation drifts, summarize what's been decided: "So far we've defined: DPF has 4 states, amber/red per ISO 2575, 200ms render budget. Now let's tackle the animation timing."

Isolate — Split Complex Tasks

One massive prompt handling telltales + layout + state machines + translations will underperform four focused prompts, each with its own tailored context. Break your HMI spec into sub-tasks.

The LLM is a CPU. The context window is RAM. You are the operating system — deciding what information to load, when, and in what format. The quality of your memory management directly determines the quality of the model's output.
Andrej Karpathy

System Prompts — Your Persistent Persona

A system prompt sets the model's behavior for the entire conversation. For FUSO HMI work, a well-crafted system prompt eliminates repetitive context-setting.

You are an expert UI/UX designer for Mitsubishi Fuso heavy-duty
truck instrument clusters. You work within these constraints:

Vehicle: Super Great (6R30 diesel, SHIFTPILOT 12-speed AMT)
Market: Japan (right-hand drive)
Standards: ISO 15005 (1.5s glance), ISO 26262 ASIL B (200ms
  telltale latency), ISO 2575 (symbol colors), UNECE R121
Display: Digital cluster, 200ms render budget
Protocol: CAN/CAN FD (J1939), CRC-protected messages

Rules:
- All telltale colors follow ISO 2575 (red/amber/green)
- All text fits within 1.5-second glance budget
- Right-hand drive zone placement (mirror LHD assumptions)
- Flag any specification as UNVERIFIED if based on general
  knowledge rather than confirmed FUSO data
- CAN signal names are illustrative — always note this

Context Window Strategy

Think of your context window as a budget. Here's how to spend it wisely:

Context Window Budget — the window is shared by four consumers: system prompt, standards reference material, your question, and response space.

  • Total: 200,000 tokens (Claude) — everything must fit
  • Place critical info at the START and END of context — not buried in the middle
  • Leave 20–30% of the window for the model's response
  • Rules of thumb: 1 token ≈ 0.75 words · 1 page ≈ 400 tokens · full ISO doc section ≈ 10,000 tokens
The Goldilocks Principle Applied

Too little: "Design a warning." → Generic, useless output.
Too much: Pasting 5 complete ISO standards + entire DBC file + full design system doc → Model loses focus, key requirements get buried.
Just right: Relevant ISO clauses + applicable CAN signals + specific design constraints + clear question → Precise, actionable output.
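The budget arithmetic can be roughed out in code before you assemble a prompt. A sketch that uses the 1 token ≈ 0.75 words rule of thumb quoted above; it is an estimate, not a real tokenizer.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate via the 1 token ~= 0.75 words rule of thumb."""
    return round(len(text.split()) / 0.75)

def fits_budget(sections, window=200_000, response_reserve=0.25):
    """Check whether the prompt sections leave ~25% of the window free
    for the model's response; returns (fits, estimated_tokens_used)."""
    used = sum(estimate_tokens(s) for s in sections)
    return used <= window * (1 - response_reserve), used
```

A quick pre-flight like this catches the "too much" failure mode: if a pasted standard section blows the reserve, trim it to the relevant clauses before sending.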

Try This Now

Use the reflection pattern to catch errors in a previous LLM output.

Copy → Paste → Replace Brackets Review the specification you just generated. Check for: (1) any ISO 2575 color violations, (2) any text strings exceeding [YOUR CHARACTER LIMIT] characters, (3) any state transitions that could leave the display in an undefined state, (4) any conflicts between simultaneous warning priorities. List every issue found, then provide a corrected version.

What to check: Did the reflection actually catch real issues, or did it just say "looks good"? If it found nothing, deliberately introduce an error and re-run to verify the technique works.

Chapter 09Claude Cowork for HMI Design

Claude Cowork (launched January 2026) represents a shift from conversation to autonomous action. Instead of asking Claude questions and receiving text answers, Cowork can independently browse the web, read and write files, execute multi-step research workflows, and iterate on its own results — all while you continue other work.

The Autonomy Slider

Think of AI assistance as a sliding scale, not an on/off switch:

  • Autocomplete — finishes your sentence
  • Chat — Q&A conversation
  • Cowork — autonomous with oversight
  • Full Agent — independent execution

For FUSO HMI design work, the Cowork position is the sweet spot. The model works autonomously on well-defined tasks while you retain oversight and final approval. Full agent mode is for development and testing workflows, not safety-critical specification work.

What Cowork Can and Can't Access

Cowork CAN
  • Browse public websites and press archives
  • Read files you upload or paste into the conversation
  • Write new files (specs, reports, tables)
  • Execute multi-step web research with source citations
  • Cross-reference documents you provide against each other
  • Generate and iterate on text, tables, and structured data
Cowork CANNOT
  • Access your Figma files (unless you export and upload screenshots/SVGs)
  • Read your company Confluence, SharePoint, or internal wikis
  • Open proprietary DBC/CAN database files (unless you paste relevant sections)
  • Access your email, Slack, or internal tools
  • Remember context from previous conversations (unless you re-provide it)
  • Verify information against paywalled standards documents it can't access
The Upload Bridge

The gap between "can" and "cannot" is bridged by you uploading or pasting the relevant data. Cowork becomes dramatically more useful when you provide it with: exported Figma frames as PNGs, relevant CAN signal excerpts from your DBC file, specific ISO standard sections (copy-paste the relevant clauses), and your internal design system documentation. Think of Cowork as a powerful analyst who just started at your company — brilliant but needs onboarding materials.

The Agent Loop

When Cowork operates autonomously, it follows a structured loop:

Plan (analyze task) → Act (execute step) → Observe (check result) → Adjust (refine approach) → Repeat — loop until the task is complete or human intervention is needed.

Seven Cowork Use Cases for FUSO HMI

1
Competitor Cluster Research

"Browse automotive press sites and create a comparison matrix of 2024-2025 heavy-duty truck digital clusters from Actros, Volvo FH, Scania S, and DAF XG+. Include: display size, resolution, information layout approach, ADAS visualization method, and night mode implementation."

2
ISO Standard Cross-Reference

"Read my DPF warning specification file and cross-reference every element against ISO 2575:2021 Annex A and ISO 15005:2017 clause 5.2. Flag violations and suggest corrections. Save results to a new file." (requires: upload your spec file AND paste the relevant ISO clauses)

3
Design System Audit

"Review all telltale specifications in our design system folder. Check for: color consistency with ISO 2575, animation rate consistency across warning levels, text length compliance, and icon uniqueness. Generate an audit report." (requires: upload all telltale spec files to the conversation)

4
CAN Signal Mapping

"Parse the attached DBC file. Extract all signals relevant to the instrument cluster. For each signal, determine: display zone, update rate, value range, and what the driver should see. Format as a signal-to-display mapping table." (requires: upload or paste your DBC file)

5
Multi-Language Alert Validation

"Take our warning text specifications and verify the Japanese translations fit within the character limit for each display zone. Check that no Japanese string exceeds the allocated pixel width at 24px font size. Flag any that overflow."

6
State Machine Completeness Check

"Review the ADAS state machine I've defined. Verify that every state has a defined entry condition, exit condition, display specification, and fallback behavior. Identify any unreachable states or missing transitions. Generate a DOT graph of the complete state machine."

7
ARCHION Transition Research

"Research the ARCHION Corporation formation timeline (FUSO + Hino, Daimler Truck + Toyota). Find published information about shared platform implications for instrument cluster design. Focus on whether existing FUSO or Hino HMI design systems will be unified."
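Use case 6, the state machine completeness check, is also easy to pre-validate locally before involving the model. A sketch with hypothetical state names (nothing below is a real FUSO state machine):

```python
def check_state_machine(states, transitions, initial):
    """Flag states missing required fields and states unreachable from `initial`.
    `states` maps name -> dict of attributes; `transitions` maps name -> successors."""
    required = {"entry", "exit", "display", "fallback"}
    issues = [f"{name}: missing {sorted(required - set(attrs))}"
              for name, attrs in states.items() if required - set(attrs)]
    # Simple reachability walk from the initial state
    seen, frontier = {initial}, [initial]
    while frontier:
        for nxt in transitions.get(frontier.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    issues += [f"{name}: unreachable" for name in states if name not in seen]
    return issues

# Hypothetical ADAS warning states; names are illustrative only
states = {
    "idle":    {"entry": "...", "exit": "...", "display": "...", "fallback": "..."},
    "warning": {"entry": "...", "exit": "...", "display": "...", "fallback": "..."},
    "orphan":  {"entry": "...", "display": "..."},  # incomplete AND unreachable
}
transitions = {"idle": ["warning"], "warning": ["idle"]}
print(check_state_machine(states, transitions, "idle"))
# flags 'orphan' as both incomplete and unreachable
```

Running a check like this first means the model's review (and the generated DOT graph) starts from a machine-verified baseline instead of hunting for trivially detectable gaps.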

The 80/20 Rule

Focus 80% of your LLM usage on the 20% of tasks that consume the most time: specification writing, compliance checking, research aggregation, and state machine validation. Don't use Cowork for tasks that take 2 minutes by hand — the overhead of writing a good prompt exceeds the time saved.

Try This Now

Delegate a research task that would take you 30+ minutes manually.

Copy → Paste → Replace Brackets Research the instrument cluster designs of [COMPETITOR 1], [COMPETITOR 2], and [COMPETITOR 3] heavy-duty trucks (2024-2025 models). For each, find: display type (analog/digital/hybrid), approximate display size, ADAS visualization approach, and night mode implementation. Compile into a comparison table. Cite your sources.

What to check: Verify at least 3 specific claims against manufacturer press materials or automotive press reviews. Cowork research is a starting point — not a finished deliverable.

Chapter 10Verification & Safety Protocols

Every LLM output in your HMI workflow must be verified before it enters a specification, gets committed to a design file, or influences a safety-relevant decision. This isn't optional — it's the difference between using LLMs productively and introducing silent errors into safety-critical vehicle systems.

Treat every LLM output as a smart intern's first draft — impressive in scope and structure, but absolutely requiring expert review before it goes anywhere near production.
Core principle for safety-critical HMI work

Why Verification Matters More Here

Unlike web design or marketing copy, errors in truck instrument cluster specifications can have physical safety consequences. A wrong color mapping for a brake warning. An incorrect priority ranking that suppresses a critical alert. A telltale timing that violates ASIL B requirements. These aren't cosmetic bugs — they affect driver safety.

Low Stakes

Marketing copy, brainstorming, competitor research summaries, internal documentation formatting

Verification: Quick sanity check

Medium Stakes

Layout proposals, icon design suggestions, CAN signal mapping drafts, non-safety UI text

Verification: Cross-reference against design system + peer review

High Stakes

Telltale color/priority specs, ASIL B timing values, ADAS warning escalation logic, regulatory compliance claims

Verification: Manual expert verification against primary sources + team review + testing

The Three Verification Traps

1. Sycophancy — The Model Agrees With You

LLMs have a strong tendency to agree with whatever premise you present. If you say "The DPF warning should be green, right?" the model is likely to say "Yes, that makes sense" — even though ISO 2575 mandates amber for caution states. This is sycophancy, and it's one of the most dangerous failure modes in safety-critical work.

How to Counter Sycophancy

  • Ask neutral questions: "What color should the DPF warning be?" — not "The DPF warning should be green, right?"
  • Invite disagreement: Add "Challenge any assumptions in my question that may be incorrect" to your prompt
  • Test with wrong premises: Deliberately include an error and see if the model catches it. If it doesn't, your verification process needs to be stronger
  • Use the Devil's Advocate technique from Chapter 08 after every critical recommendation

2. Automation Bias — Trusting Because It's AI

When output is well-formatted and confidently stated, humans tend to accept it without scrutiny. A table with perfect markdown formatting, clear headers, and professional language looks authoritative — even if the content contains errors. The more polished the output, the more carefully you need to verify the substance.

How to Counter Automation Bias

  • Verify the hardest items first: Start with the cells in the table that require the most domain knowledge — those are where errors hide
  • Check specific numbers: Timing values, character counts, pixel dimensions — verify these against primary sources, not the model's output
  • Separate format from content: A beautifully formatted wrong answer is still wrong

3. Consistency Illusion — Same Input, Different Output

The same prompt can produce different results each time due to randomness in token sampling. You might run a compliance check today and get "all clear," then run the identical check tomorrow and get three violations flagged. For critical specifications:

  • Run the same verification prompt at least 3 times
  • Set temperature to 0 for maximum consistency
  • Treat any inconsistency across runs as a flag requiring human expert review

The Verification Checklist

Apply this checklist to every LLM output that will enter a specification document:

| Check | Question | Red Flag |
|---|---|---|
| Source | Can I trace this claim to a primary source (ISO doc, DBC file, MFTBC spec)? | Model cites a standard clause that doesn't exist |
| Color | Do all telltale colors match ISO 2575 mandates? | Green used for a caution state, amber for informational |
| Timing | Do all latency values meet ASIL B requirements? | Render time > 200 ms, warning response > 100 ms |
| Glance | Can each display element be read in ≤ 1.5 seconds? | Text strings exceeding 3 lines or 24+ characters with complex terminology |
| Priority | Is the warning priority ranking consistent with safety criticality? | Convenience alerts ranked above safety warnings |
| RHD | Are zone placements correct for right-hand drive? | Critical info placed assuming left-hand drive |
| CAN | Are signal names marked as illustrative/unverified? | Model presents invented signal names as factual |
| Completeness | Are all states covered, including edge cases and failures? | Missing "signal lost" or "sensor fault" states |
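A few of these rows reduce to simple predicates you can run before the human pass; rows like Source, Priority, and RHD still require expert judgment. A hedged sketch, with thresholds taken from the checklist and a hypothetical element structure:

```python
ASIL_B_RENDER_BUDGET_MS = 200   # Timing row threshold
GLANCE_CHAR_LIMIT = 24          # Glance row threshold

def machine_checks(element):
    """Run the automatable checklist rows on one display element (a dict).
    Returns a list of red flags; an empty list means 'ready for human review'."""
    flags = []
    if element.get("render_ms", 0) > ASIL_B_RENDER_BUDGET_MS:
        flags.append("Timing: render time exceeds 200 ms budget")
    if len(element.get("text", "")) > GLANCE_CHAR_LIMIT:
        flags.append("Glance: text exceeds 24-character limit")
    if element.get("can_signal") and not element.get("can_signal_illustrative", False):
        flags.append("CAN: signal name not marked as illustrative/unverified")
    return flags

element = {"text": "Stop Engine — Oil Pressure", "render_ms": 180,
           "can_signal": "ENG_OIL_PRESS_LOW",  # hypothetical name
           "can_signal_illustrative": True}
print(machine_checks(element))  # flags the 26-character text string
```

Automating the mechanical rows frees the expert pass to concentrate on the checks a script cannot do, which is where the real risk lives.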
Non-Negotiable Rule

No LLM-generated specification for safety-critical display elements (ASIL B telltales, ADAS warnings, brake system indicators) should ever be used without manual verification against the primary standard documents and internal MFTBC specifications. The LLM accelerates your work — it does not replace your engineering judgment.

Try This Now

Stress-test the verification checklist on a real LLM output.

Copy → Paste → Replace Brackets Take any LLM-generated specification from your recent work. Run it through every row of the Verification Checklist table in this chapter. For each check, record: Pass / Fail / Unable to verify. The items you mark "unable to verify" are your highest risk.

What to check: This is a meta-exercise — you're verifying your own verification process. If most items are "unable to verify," you need better access to primary source documents.

Chapter 11ARCHION, Coretura & SDV-Era HMI

The Japanese heavy-duty truck industry is undergoing its largest structural transformation in decades. Two major partnerships will reshape the competitive and technical landscape that your HMI designs must navigate.

ARCHION Corporation (April 2026)

ARCHION Corporation is a holding company that brings Mitsubishi Fuso (Daimler Truck) and Hino Motors (Toyota Motor) under a single corporate umbrella. FUSO and Hino continue to operate as separate brands — this is not a brand merger, but a structural unification creating Japan's largest commercial vehicle group. For HMI designers, the implications are significant: potential design system convergence between FUSO and Hino product lines, shared ADAS platforms requiring consistent warning interfaces, and a combined product portfolio spanning light-duty (eCanter, Dutro) through heavy-duty (Super Great, Profia).

Coretura AB

Coretura AB is a separate joint venture between Daimler Truck and Volvo Group focused on developing a shared Software-Defined Vehicle (SDV) platform. This is purely a software collaboration — the companies remain competitors in vehicle sales. The SDV platform will provide the middleware and software architecture that future truck instrument clusters run on.

[Diagram] Toyota Motor (parent of Hino) and Daimler Truck (parent of FUSO) form ARCHION Corporation (FUSO + Hino, April 2026): shared vehicles, unified design system. Daimler Truck and Volvo Group (Volvo Trucks · Renault · Mack) form Coretura AB (SDV software platform): shared software, SDV stack.
Two parallel partnerships: ARCHION unifies vehicles and design, Coretura unifies software. Daimler Truck sits at the center of both.

What SDV Means for HMI Design

Software-Defined Vehicles separate hardware from software update cycles. For instrument cluster designers, this means:

  • Over-the-air updates: HMI changes can ship after vehicle delivery. Your design must handle versioning — different trucks in the same fleet may display different UI versions
  • Shared rendering platform: Coretura's middleware may standardize the graphics pipeline across Daimler and Volvo products. Your FUSO-specific design system will need to work within this shared framework
  • API-driven displays: Instead of hardcoded screen layouts, clusters may consume display data from a standardized API — making your design specifications more like design tokens and less like pixel-perfect mockups
  • Longer design lifecycles: When software updates can change the UI independently of hardware, your initial design must accommodate years of iterative updates while maintaining visual and functional consistency
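To make the design-token idea concrete, here is a hypothetical sketch of a versioned token payload a cluster renderer might consume. The token names, version field, and fallback behavior are illustrative assumptions, not any actual Coretura API:

```python
# Hypothetical sketch of a versioned design-token payload an SDV
# cluster might consume. All names and values are illustrative.
import json

tokens_v1 = {
    "schema_version": "1.0.0",           # OTA updates bump this independently of hardware
    "color.warning.caution": "#FFB300",  # amber, per ISO 2575
    "color.warning.danger": "#D32F2F",   # red, per ISO 2575
    "type.speed.size_px": 64,
}

def resolve(tokens: dict, name: str, fallback):
    """Renderer-side lookup: unknown tokens fall back gracefully, so an
    older truck in the fleet degrades safely when it meets a newer payload."""
    return tokens.get(name, fallback)

payload = json.loads(json.dumps(tokens_v1))  # stands in for an API response
print(resolve(payload, "color.warning.caution", "#FFB300"))  # #FFB300
```

The fallback path is the design-relevant part: in a fleet running mixed UI versions, every token lookup needs a defined answer even when the token doesn't exist yet on that truck.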

Hydrogen & Next-Gen Powertrains

The Hino Profia Z FCV (hydrogen fuel cell) joins the ARCHION product family, adding a third powertrain type to your HMI scope. Hydrogen-specific displays require: tank pressure per bank (six 70 MPa tanks), fuel cell stack temperature, hydrogen leak warnings (safety-critical — immediate action required), and dual-source range estimation. Your design system needs to be modular enough to accommodate this without a ground-up redesign.
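As a thought experiment, the hydrogen-specific elements could be modeled like this. Every name, threshold, and warning ID below is an illustrative assumption, not a Hino or MFTBC specification:

```python
# Hypothetical data model for hydrogen-specific cluster elements.
# Thresholds and warning names are illustrative only.
from dataclasses import dataclass

NOMINAL_TANK_MPA = 70.0  # six 70 MPa tanks, per the chapter text

@dataclass
class H2Display:
    tank_pressure_mpa: list   # one reading per tank bank
    stack_temp_c: float
    leak_detected: bool

    def active_warnings(self) -> list:
        warnings = []
        if self.leak_detected:
            # Safety-critical: red, immediate action (ISO 2575 color logic)
            warnings.append("H2_LEAK_RED")
        if any(p > NOMINAL_TANK_MPA for p in self.tank_pressure_mpa):
            warnings.append("TANK_OVERPRESSURE_RED")
        return warnings

state = H2Display(tank_pressure_mpa=[68.0] * 6, stack_temp_c=72.0,
                  leak_detected=True)
print(state.active_warnings())  # ['H2_LEAK_RED']
```

Even this toy model shows why modularity matters: the warning logic is a new powertrain-specific module, while the color and priority rules stay in the shared design system.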

LLMs as a Research Tool for Industry Transitions

Use LLMs to stay current on ARCHION and Coretura developments. Set up periodic Cowork research tasks to scan industry press, Daimler Truck investor presentations, and Toyota corporate announcements for HMI-relevant updates. The model can synthesize information from multiple sources faster than manual monitoring — but always verify key claims against primary corporate communications.

Try This Now

Use an LLM to research a fast-moving industry development.

Copy → Paste → Replace Brackets
What is the latest publicly available information about [ARCHION Corporation / Coretura AB] as of [CURRENT MONTH AND YEAR]? Focus on: timeline updates, organizational structure decisions, and any announced implications for vehicle HMI or instrument cluster design. Cite specific sources and dates.

What to check: Cross-reference against Daimler Truck and Toyota investor relations pages. Industry mergers generate speculation — only trust claims linked to official corporate communications.

Chapter 12: Expert Workflow & Checklist

You've now covered the complete foundation: the vehicle platforms, the regulations, how LLMs work, how to prompt them effectively, how to use advanced techniques, how to verify their output, and how the industry is evolving. This final chapter puts it all together into a practical daily workflow.

AI doesn't replace thinking — it amplifies it. The quality of your output is still determined by the quality of your thinking. LLMs let you execute faster, explore more options, and catch more errors — but only if you bring the domain expertise and critical judgment.
The guiding principle

Start Small, Then Compound

Don't try to revolutionize your entire workflow on day one. Start with one use case — perhaps using an LLM to draft alert text for a single warning type. Master that. Then expand to specification tables. Then to compliance checking. Each successful use case builds your prompt engineering intuition and your confidence in verification. This is compounding — small, consistent improvements that accumulate into transformative workflow change.

The Daily Workflow

Step 1
Set Your Context
Begin each session with your system prompt (Chapter 08). Specify vehicle model, powertrain, market, applicable standards. This eliminates 80% of generic/wrong outputs.
Step 2
Draft with LLM
Use prompt patterns from Chapter 06 to generate first-draft specifications, state matrices, alert text, or layout analyses. Start with zero-shot; escalate to few-shot or CoT only if needed.
Step 3
Verify Against Sources
Apply the verification checklist from Chapter 10. Check every safety-critical value against primary sources. Use self-consistency (run 3 times) for critical specs. Use the Devil's Advocate technique for design decisions.
Step 4
Iterate and Refine
Use reflection prompts to have the model review its own output. Adjust your prompt based on what worked and what didn't. Save successful prompts as templates for reuse.
Step 5
Delegate Research to Cowork
For time-consuming research, competitor analysis, or multi-file audits — hand off to Cowork and continue other work. Review results when they return.
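The self-consistency check in Step 3 can be partly mechanized when the output is structured. A sketch, where the three run dictionaries stand in for repeated calls to whatever LLM API you use (the run data here is simulated, not real model output):

```python
# Sketch of the Step 3 self-consistency check: run the same prompt
# three times and flag any field where the runs disagree.
from collections import Counter

def self_consistency(runs: list) -> dict:
    """Return {field: (majority_value, all_runs_agree)} across runs."""
    report = {}
    for field in runs[0]:
        values = [run.get(field) for run in runs]
        majority, count = Counter(values).most_common(1)[0]
        report[field] = (majority, count == len(runs))
    return report

# Three simulated runs of the same telltale-spec prompt:
runs = [
    {"color": "amber", "latency_ms": 200},
    {"color": "amber", "latency_ms": 200},
    {"color": "amber", "latency_ms": 150},  # disagreement -> verify manually
]
report = self_consistency(runs)
print(report["color"])       # ('amber', True)
print(report["latency_ms"])  # (200, False)
```

Any field that comes back with `False` is exactly where you open the primary standard document rather than trusting the majority value.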

Common Mistakes to Avoid

Trusting Without Verifying

The number one mistake. LLM output looks authoritative even when wrong. Always verify safety-critical specifications against primary sources.

Under-Specifying Prompts

"Design a warning" will produce generic output. "Design a DPF regen warning for the Super Great, amber per ISO 2575, RHD Japan market" produces usable output.

Over-Prompting Simple Tasks

Not every task needs a 500-word prompt with few-shot examples and chain-of-thought. Match your prompt complexity to the task complexity.

Ignoring Context Drift

Long conversations degrade quality. Break sessions by topic. Re-state critical constraints periodically. Don't assume the model remembers your requirements from 30 messages ago.

Using Wrong Model Tier

Flagship models for simple formatting tasks waste time and money. Small models for complex compliance analysis miss critical nuances. Match the model to the task.

Not Saving Successful Prompts

When a prompt produces excellent output, save it as a template. Building a personal prompt library compounds your efficiency over time.
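A prompt library can start as nothing more than parameterized strings. A minimal sketch using Python's standard `string.Template`; the template text adapts the Spec Generator pattern from Chapter 06, and all filled-in values are examples:

```python
# Minimal personal prompt library: saved templates with named
# placeholders. Template text and values are illustrative.
from string import Template

LIBRARY = {
    "spec_generator": Template(
        "Given $vehicle, $powertrain, and $standards, generate a complete "
        "$component specification. Format as a markdown table."
    ),
}

prompt = LIBRARY["spec_generator"].substitute(
    vehicle="Super Great", powertrain="6R30 diesel",
    standards="ISO 2575 / ISO 15005", component="DPF regen warning",
)
print(prompt)
```

`substitute` raises an error on missing placeholders, which is useful here: a half-filled prompt template fails loudly instead of shipping with a bracket left in.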

Your Expert Checklist

Pin this to your workstation. Before sending any LLM-generated specification forward:

Context set: Vehicle model, powertrain, market, and standards specified in prompt
ISO 2575 colors verified: Red = danger, amber = caution, green = normal — no exceptions
ASIL B timing confirmed: ≤ 200ms telltale display, ≤ 100ms unmotivated response
Glance budget respected: All elements readable within 1.5-second single glance (ISO 15005)
RHD zones correct: Zone placement verified for right-hand drive Japan market
CAN signals marked: All signal names flagged as illustrative/unverified unless from actual DBC
Edge cases covered: Signal lost, sensor fault, simultaneous warnings, CAN bus failure states defined
Hallucination check: No display specs, CAN signal names, or standard clauses taken at face value
Self-consistency passed: Critical specifications verified by running prompt 3 times and comparing
Peer reviewed: Another team member has reviewed the LLM-generated specification
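Two rows of this checklist, colors and timing, lend themselves to a mechanical pre-check before human review. An illustrative sketch using the severity-to-color mapping stated above; the spec entry is a made-up failing example, not a real FUSO specification:

```python
# Mechanical pre-check for two checklist rows: ISO 2575 colors and the
# ASIL B 200ms render budget. Spec entries are illustrative.
ISO_2575_COLOR = {"danger": "red", "caution": "amber", "normal": "green"}

def precheck(spec: dict) -> list:
    """Return a list of issues; empty means both checks passed."""
    issues = []
    expected = ISO_2575_COLOR.get(spec["severity"])
    if spec["color"] != expected:
        issues.append(f"color {spec['color']!r} != {expected!r} "
                      f"for severity {spec['severity']!r}")
    if spec["render_ms"] > 200:  # ASIL B telltale display budget
        issues.append(f"render {spec['render_ms']}ms exceeds 200ms budget")
    return issues

bad_spec = {"severity": "caution", "color": "green", "render_ms": 250}
print(precheck(bad_spec))  # two issues: wrong color, blown budget
```

A passing pre-check is not a sign-off; it only filters out mechanical violations so human review time goes to the judgment calls.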

Quick-Reference: Before & After Every Prompt

Before You Prompt
  1. What exactly do I need? (Be specific in your own mind first)
  2. What context does the model need that it doesn't have?
  3. What format do I want the output in?
  4. What's the quality bar? (First draft vs. final spec)
  5. How will I verify the output?
After You Get Output
  1. Does this match what I actually needed?
  2. Are the key facts verifiable against primary sources?
  3. Is the reasoning sound? (Follow the logic)
  4. What's missing that I should add?
  5. If iterating: what specific feedback should I give?

Measuring Your Impact

Track these metrics to quantify how LLMs improve your workflow:

  • Specification draft time: How long from blank page to first complete draft? (Target: 50-70% reduction)
  • Compliance review coverage: How many standards clauses checked per review? (Target: 2-3x increase)
  • Iteration cycles: How many prompt-adjust-verify loops to reach final spec? (Decreases as prompt skills improve)
  • Error rate: Errors caught in verification per specification. (Should decrease as you build better prompts and system contexts)
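The first metric is simple arithmetic; a small helper keeps the target band honest (the hours below are made-up examples, not benchmarks):

```python
# Illustrative draft-time metric for the targets above.
def reduction_pct(before_h: float, after_h: float) -> float:
    """Percent reduction in draft time, rounded to one decimal."""
    return round(100 * (before_h - after_h) / before_h, 1)

print(reduction_pct(6.0, 2.4))  # 60.0 -- within the 50-70% target band
```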
The goal isn't to replace your expertise with AI. The goal is to spend less time on the mechanical parts of specification work — drafting, formatting, cross-referencing — so you can spend more time on the parts that require human judgment: making design decisions that keep drivers safe.
Try This Now

Build your personal system prompt.

Copy → Paste → Replace Brackets
Using the template from Chapter 08, write a system prompt customized for your specific role. Include: your vehicle model(s), your market, your applicable standards, your display constraints, and your CAN protocol details. Save it as a text file and use it to start every new LLM session this week.

What to check: After a week of using your system prompt, note which instructions the LLM follows consistently and which it ignores. Refine accordingly.

Glossary

Terms used throughout this reference. Sorted by category.

Vehicle & Powertrain
ADAS: Advanced Driver Assistance Systems — automated safety features like emergency braking, blind spot detection, lane departure warnings
AMT: Automated Manual Transmission — clutchless manual gearbox (FUSO uses "SHIFTPILOT" brand name)
CAN / CAN FD: Controller Area Network — the communication bus connecting ECUs in the vehicle. CAN FD is the faster variant (up to 2 Mbps data phase)
DBC: Database Container — the file format that defines CAN signal names, IDs, byte positions, and scaling factors. Proprietary per manufacturer
DPF: Diesel Particulate Filter — traps soot from diesel exhaust; requires periodic "regeneration" (burning off accumulated soot)
ECU: Electronic Control Unit — any embedded computer module in the vehicle (engine ECU, ADAS ECU, body ECU, etc.)
J1939: SAE standard protocol for CAN communication in heavy-duty vehicles. Defines message IDs and data formats
RHD / LHD: Right-Hand Drive / Left-Hand Drive — refers to driver position. Japan market = RHD
SCR: Selective Catalytic Reduction — exhaust aftertreatment using AdBlue (urea) to reduce NOx emissions
SOC: State of Charge — battery level as a percentage (eCanter)
V2X: Vehicle-to-Everything communication — wireless data exchange between vehicle and infrastructure, other vehicles, or grid
Safety & Standards
ASIL: Automotive Safety Integrity Level — ISO 26262 risk classification. ASIL A (lowest) to ASIL D (highest). Instrument clusters are typically ASIL B
ISO 2575: International standard defining telltale symbols and mandated colors for vehicle displays
ISO 15005: Standard for in-vehicle dialogue management — establishes principles for driver-display interaction
ISO 16673: Occlusion method standard — source of the 1.5-second maximum single-glance time measurement
ISO 26262: Functional safety standard for road vehicles — defines ASIL levels and safety requirements
UNECE R121: UN regulation specifying required telltale symbols and identification rules
Telltale: An indicator light/icon on the instrument cluster that communicates a vehicle status or warning to the driver
LLM & AI
Context window: The maximum amount of text (measured in tokens) an LLM can process in a single conversation
Few-shot: Providing the LLM with 1-3 examples of the desired output format before your actual request
Hallucination: When an LLM generates plausible-sounding but factually incorrect information
RLHF: Reinforcement Learning from Human Feedback — a training technique that aligns model outputs with human preferences
Sycophancy: The tendency of LLMs to agree with the user's stated position even when it's incorrect
Temperature: A parameter controlling output randomness. Low (0-0.2) = deterministic; High (0.7-1.0) = creative
Token: The sub-word unit LLMs process. ~1 token ≈ 0.75 English words. "eCanter" might be 3 tokens
Zero-shot: Asking the LLM to perform a task with no examples — just a clear instruction

Appendix: Prompt Library

Every reusable prompt template from this reference, collected for quick access. Replace [BRACKETED TERMS] with your project specifics.

Spec Generation: The Spec Generator (Ch06)
Given [VEHICLE MODEL], [POWERTRAIN], and [APPLICABLE STANDARDS], generate a complete [COMPONENT] specification including [SPECIFIC FIELDS]. Format as [TABLE/JSON/MARKDOWN].

Compliance: The Compliance Checker (Ch06)
Review this [DESIGN/SPECIFICATION] against [ISO STANDARD]. For each element, state whether it meets the requirement, cite the specific clause, and flag any violations with suggested corrections.

State Machines: The State Matrix Builder (Ch06)
For the [SYSTEM NAME] with states [LIST STATES], create a state transition matrix. Include: trigger conditions, display changes at each transition, priority level, and fallback behavior if the CAN signal is lost.

Copywriting: The Copywriter (Ch06)
Write [NUMBER] alert text variants for [WARNING TYPE] on the [VEHICLE] cluster. Each must be under [CHARACTER LIMIT], use [LANGUAGE], and follow the [TONE] voice. Rate each variant for clarity at highway-speed glance time.

Layout Analysis: The Layout Analyst (Ch06)
Analyze this cluster screenshot. Identify the information hierarchy, estimate glance-time compliance per zone, and suggest improvements for [SPECIFIC CONCERN]. Consider that this is a right-hand drive vehicle for the Japan market.

Competitor Analysis: The Competitor Benchmarker (Ch06)
Compare the [COMPETITOR VEHICLE] instrument cluster against our [FUSO MODEL] design for [SPECIFIC ASPECT]. Structure the comparison as: feature, their approach, our approach, advantage/gap, recommendation.

System Prompt: FUSO HMI Persona (Ch08)
You are an expert UI/UX designer for Mitsubishi Fuso heavy-duty truck instrument clusters. You work within these constraints:
Vehicle: [SUPER GREAT / eCANTER] ([POWERTRAIN DETAILS])
Market: [JAPAN / EXPORT] ([RHD/LHD])
Standards: ISO 15005 (1.5s glance), ISO 26262 ASIL B (200ms telltale latency), ISO 2575 (symbol colors), UNECE R121
Display: Digital cluster, 200ms render budget
Protocol: CAN/CAN FD (J1939), CRC-protected messages
Rules:
- All telltale colors follow ISO 2575 (red/amber/green)
- All text fits within 1.5-second glance budget
- [RHD/LHD] zone placement
- Flag any specification as UNVERIFIED if based on general knowledge rather than confirmed FUSO data
- CAN signal names are illustrative — always note this

Reflection: Self-Review Pattern (Ch08)
Now review your own specification above. Check for:
1. Any ISO 2575 color violations
2. Any text strings exceeding [CHARACTER LIMIT] characters per line
3. Any state transitions that could leave the display in an undefined state
4. Any conflicts between simultaneous warning priorities
List every issue you find, then provide a corrected version.

Devil's Advocate: Counterargument Pattern (Ch08)
You just recommended [DESIGN DECISION]. Now argue against this. What are the strongest reasons this could be wrong? Consider:
- Right-hand drive viewing angles
- Competition with ADAS warnings for visual attention
- Low-light readability
- Driver expectations coming from [PREVIOUS MODEL/COMPETING MODEL]
Give me your three strongest counterarguments.

Screenshot Review: Figma Layout Feedback (Ch05)
I've attached a screenshot of my instrument cluster layout for the [VEHICLE MODEL]. This is a [RIGHT/LEFT]-hand drive vehicle for the [MARKET] market. Review this layout for: (1) ISO 15005 glance-time compliance — can each information group be read in ≤1.5 seconds? (2) Information hierarchy — is the most critical data (speed, active warnings) the most visually prominent? (3) Zone balance — is the layout optimized for the driver's primary viewing angle? Give specific, actionable feedback on what to change and why.

Competitor Photos: Structured Comparison (Ch05)
I've attached [NUMBER] instrument cluster photos from these trucks: [LIST MODELS]. Create a structured comparison table with columns: Feature, [Model 1] Approach, [Model 2] Approach, [Model 3] Approach, Best Practice Winner. Compare: information hierarchy, color usage, ADAS visualization method, typography/readability approach, and night mode design (if visible).

Icon Legibility: Telltale Assessment (Ch05)
These are telltale icons for a truck instrument cluster viewed at ~800mm distance. At 1.5-second glance time: (1) Which icons could be confused with each other? (2) Which icons lack sufficient contrast for night driving? (3) Which icons might not be recognized by a driver unfamiliar with the specific system? For each issue, suggest a specific fix.

Competitor Benchmark: Matrix Fill (Ch03)
I've attached instrument cluster images from [LIST COMPETITOR MODELS]. Fill in this comparison matrix based on what you can observe. For each cell, describe what you see. If something isn't visible in the image, mark it "Not visible." Do not guess. Features to compare: display type (analog/digital/hybrid), ADAS visualization approach, warning escalation style, information hierarchy, night mode approach, EV-specific UI elements. Format as a markdown table.

Try This Now — Exercise Prompts

Exercise: Product Knowledge Test (Ch01)
What are the current Mitsubishi Fuso truck models in production as of [CURRENT YEAR]? For each, list: vehicle class (light/medium/heavy), powertrain type, and primary market. Flag anything you're uncertain about.

Exercise: Compliance Matrix (Ch02)
I'm designing [YOUR DISPLAY ELEMENT] for the [VEHICLE MODEL] instrument cluster. Create a compliance checklist covering: ISO 15005 (glance time), ISO 26262 ASIL B (latency), ISO 2575 (colors/symbols), and UNECE R121 (telltale requirements). For each standard, state the specific requirement and whether my element is likely affected. Flag any requirements you cannot verify.

Exercise: Warning Escalation (Ch03)
Design a [NUMBER]-stage warning escalation for [ADAS SYSTEM NAME] on the Fuso Super Great cluster. For each stage, define: trigger condition, telltale color (per ISO 2575), animation behavior (static/pulse rate/flash rate), text string (max 24 characters), audio alert type, and what causes escalation to the next stage. Format as a markdown table.

Exercise: Hallucination Test (Ch04)
Explain the difference between the [FUSO TERM 1] and [FUSO TERM 2] systems on the Mitsubishi Fuso Super Great. Then tell me: how confident are you in this answer, and what's your source? If you're uncertain, say so explicitly rather than guessing.

Exercise: Model Tier Comparison (Ch05)
Create a state transition matrix for the [SYSTEM NAME] with these states: [LIST YOUR STATES]. Include: trigger condition for each transition, display changes, priority level, and fallback if the CAN signal is lost.

Exercise: Prompt Improvement (Ch06)
Take your weakest recent LLM interaction. Rewrite it using the skeleton: Role: [WHO] / Context: [VEHICLE, DISPLAY, STANDARDS] / Task: [SPECIFIC OUTPUT] / Format: [TABLE/JSON/PROSE] / Constraints: [GLANCE TIME, ASIL, RHD, CHAR LIMITS]. Run both versions and compare.

Exercise: Self-Review (Ch08)
Review the specification you just generated. Check for: (1) any ISO 2575 color violations, (2) any text strings exceeding [YOUR CHARACTER LIMIT] characters, (3) any state transitions that could leave the display in an undefined state, (4) any conflicts between simultaneous warning priorities. List every issue found, then provide a corrected version.

Exercise: Competitor Research (Ch09)
Research the instrument cluster designs of [COMPETITOR 1], [COMPETITOR 2], and [COMPETITOR 3] heavy-duty trucks (2024-2025 models). For each, find: display type (analog/digital/hybrid), approximate display size, ADAS visualization approach, and night mode implementation. Compile into a comparison table. Cite your sources.

Exercise: Industry Research (Ch11)
What is the latest publicly available information about [ARCHION Corporation / Coretura AB] as of [CURRENT MONTH AND YEAR]? Focus on: timeline updates, organizational structure decisions, and any announced implications for vehicle HMI or instrument cluster design. Cite specific sources and dates.