This handbook serves as the definitive guide for conducting a Design Study Lite following the established 9-week methodology. It is a dual-purpose document: a comprehensive research framework for systematic visualization design and a detailed process log for documenting collaborative design evolution.
The methodology is designed to meticulously capture the entire process of collaborative visualization design, from initial domain problem identification to final evaluation and reflection.
Find the handbook at: Design Study Lite Handbook: A Complete Reference
This handbook implements the Design Study "Lite" Methodology organized into five main phases corresponding to a 9-week research cycle:
1. **Abstract Phase (Weeks 1-2):** Project Proposal, Team Charter, Domain Expert Interviews, Problem Identification, Task Abstraction
2. **Design Phase (Weeks 3-4):** Initial Sketches, Feedback Sessions, Data Gathering, Digital Prototyping
3. **Build Phase (Weeks 5-6):** Interactive Visualization Implementation, Technical Development, Iterative Refinement
4. **Evaluate Phase (Weeks 7-8):** Usability Testing, Qualitative Assessment, Feedback Integration
5. **Post-Design Study (Week 9):** Collaborative Authoring Analysis, Methodology Reflection, Knowledge Synthesis
```
/Design-Study-Lite-Handbook
├── README.md
├── QUICK-START-GUIDE.md              ← Simplified overview for beginners
├── Stage-1-Abstract-Phase/           ← Weeks 1-2: Domain Analysis
│   ├── 1.1-Project-Proposal-Transport-Example.md
│   ├── 1.2-Interview-Log.md
│   ├── 1.3-Task-Abstraction-Log.md
│   └── 1.4-Task-Taxonomies.md
├── Stage-2-Design-Phase/             ← Weeks 3-4: Iterative Design
│   ├── 2.1-Hand-Drawn-Sketch-Log.md
│   ├── 2.2-Data-Cleaning-Log.md
│   ├── 2.3-Digital-Sketch-Figma-Notes.md
│   └── 2.4-Digital-Prototype-VegaLite.json
├── Stage-3-Build-Phase/              ← Weeks 5-6: Implementation
│   ├── 3.1-Interactive-Viz-Transport/
│   │   ├── index.html
│   │   ├── main.js
│   │   └── style.css
│   └── 3.2-Build-Log.md
├── Stage-4-Evaluate-Phase/           ← Weeks 7-8: Usability Assessment
│   ├── 4.1-Usability-Test-Script.md
│   └── 4.2-Usability-Test-Notes.md
├── Stage-5-Post-Design-Study/        ← Week 9: Reflection & Analysis
│   ├── 5.1-Lessons-Learned.md
│   └── 5.2-Key-Elements-of-Collaboration.md
├── Templates/                        ← Reusable research instruments
│   ├── Template-Project-Proposal.md
│   └── Template-Task-Abstraction-Log.md
└── Training-Materials/               ← Case studies and examples
    └── Case-Studies/
        └── Transport-Case-Study-Full/
```
- [Quick Start Guide](QUICK-START-GUIDE.md) ← Start here for overview
- [Stage 1: Abstract Phase](Stage-1-Abstract-Phase/README.md) ← Begin methodology
- [Stage 2: Design Phase](Stage-2-Design-Phase/README.md)
- [Stage 3: Build Phase](Stage-3-Build-Phase/README.md)
- [Stage 4: Evaluate Phase](Stage-4-Evaluate-Phase/README.md)
- [Stage 5: Post-Design Study](Stage-5-Post-Design-Study/README.md)

[Complete Transport Case Study →](Training-Materials/Case-Studies/README.md)
- See a full 9-week methodology implementation
- Understand each deliverable with real examples
- Reference when working on your own project
- Templates - Ready-to-use forms and instruments
- Research Questions Reference - Evaluation framework
### Stage 1: Abstract Phase (Weeks 1-2)

Focus: Domain problem characterization, semi-structured interviews, task abstraction
Research Questions:
- Does the project proposal properly outline the domain problem to be solved?
- What modalities were involved in conducting the semi-structured interviews, and what was their lineage?
- How effective was the task characterization (translation of domain problems into abstract tasks)?
- Was task typology (Brehmer & Munzner) applied to its full potential?
- Did supplementation with analytic task taxonomy (Amar et al.) enable finer-grain task description?
Deliverables:
- 1.1-Project-Proposal-Transport-Example.md - Comprehensive domain problem outline
- 1.2-Interview-Log.md - Semi-structured interview documentation and modality analysis
- 1.3-Task-Abstraction-Log.md - Systematic task characterization using established frameworks
- 1.4-Task-Taxonomies.md - Integration of Brehmer & Munzner with Amar et al. taxonomies
### Stage 2: Design Phase (Weeks 3-4)

Focus: Iterative sketching, data exploration, digital prototyping, feedback integration
Research Questions:
- How many iterations of data cleaning and mining occur throughout exploration?
- Which platforms generate exploratory visualizations and how do they differ?
- How do artifacts evolve after feedback sessions?
- How effective are sketches and storyboards in representing identified tasks?
- What evolution exists from initial to final digital sketches?
- Are standard drawing tools sufficient for static visualization representation?
Deliverables:
- 2.1-Hand-Drawn-Sketch-Log.md - Comprehensive sketching iteration documentation
- 2.2-Data-Cleaning-Log.md - Data relationship formulation and mining processes
- 2.3-Digital-Sketch-Figma-Notes.md - Platform comparison and digital design evolution
- 2.4-Digital-Prototype-VegaLite.json - Static prototype specification and feedback integration
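A digital prototype at this stage can be captured as a declarative Vega-Lite specification. The sketch below is a minimal, hypothetical example of what such a file might contain; the data URL and the fields `mode` and `duration_min` are illustrative placeholders, not deliverables from the handbook's case study:

```json
{
  "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
  "description": "Hypothetical static prototype: mean trip duration by transport mode",
  "data": {"url": "data/transport-trips.csv"},
  "mark": "bar",
  "encoding": {
    "x": {"field": "mode", "type": "nominal", "title": "Transport mode"},
    "y": {
      "aggregate": "mean",
      "field": "duration_min",
      "type": "quantitative",
      "title": "Mean trip duration (min)"
    }
  }
}
```

A spec like this can be pasted into the Vega-Lite online editor for rendering, which makes the static prototype easy to circulate during feedback sessions before any implementation work begins.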
### Stage 3: Build Phase (Weeks 5-6)

Focus: Interactive implementation, technical requirements fulfillment, iterative refinement
Research Questions:
- How effectively are static digital sketches translated into web-based interactive visualizations?
- Does implementation include minimum requirements (2+ visual encoding techniques, proper color channel usage, brushing & linking, details-on-demand)?
- Is feedback throughout the process properly integrated?
- Are final visualizations exploration-friendly?
Deliverables:
- 3.1-Interactive-Viz-Transport/ - Complete interactive visualization implementation
- 3.2-Build-Log.md - Technical development process and requirement validation
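The minimum requirements above (two or more visual encoding techniques, color channel usage, brushing & linking, details-on-demand) can all be demonstrated in a single declarative specification. The following Vega-Lite sketch is a hedged illustration under assumed data, not the handbook's actual implementation; the dataset and the fields `distance_km`, `duration_min`, and `mode` are hypothetical:

```json
{
  "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
  "description": "Hypothetical sketch covering the Stage 3 minimum requirements",
  "data": {"url": "data/transport-trips.csv"},
  "hconcat": [
    {
      "params": [{"name": "brush", "select": "interval"}],
      "mark": "point",
      "encoding": {
        "x": {"field": "distance_km", "type": "quantitative"},
        "y": {"field": "duration_min", "type": "quantitative"},
        "color": {"field": "mode", "type": "nominal"},
        "tooltip": [
          {"field": "mode", "type": "nominal"},
          {"field": "duration_min", "type": "quantitative"}
        ]
      }
    },
    {
      "transform": [{"filter": {"param": "brush"}}],
      "mark": "bar",
      "encoding": {
        "x": {"field": "mode", "type": "nominal"},
        "y": {"aggregate": "count", "type": "quantitative"},
        "color": {"field": "mode", "type": "nominal"}
      }
    }
  ]
}
```

In this sketch the interval brush on the scatterplot filters the linked bar chart (brushing & linking), the tooltip channel provides details-on-demand, and transport mode is redundantly mapped to the color channel across both views, giving two coordinated encoding techniques.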
### Stage 4: Evaluate Phase (Weeks 7-8)

Focus: Qualitative usability assessment, feedback integration, iterative refinement
Research Questions:
- Is accumulated feedback properly integrated into final artifacts?
- Are visualizations sufficiently exploration-friendly for end users?
- How is qualitative usability study performed and documented?
- How satisfactory are artifacts and final visualizations?
- Is additional iteration necessary based on evaluation outcomes?
Deliverables:
- 4.1-Usability-Test-Script.md - Systematic usability testing protocol
- 4.2-Usability-Test-Notes.md - Qualitative assessment results and iteration planning
### Stage 5: Post-Design Study (Week 9)

Focus: Collaborative authoring analysis, methodology reflection, knowledge synthesis
Research Questions:
- What key elements emerge in collaborative authoring of visualization systems?
- How does the methodology perform across different domain contexts?
- What insights contribute to design study methodology advancement?
Deliverables:
- 5.1-Lessons-Learned.md - Comprehensive methodology reflection and insights
- 5.2-Key-Elements-of-Collaboration.md - Collaborative authoring framework analysis
- Weeks 1-2: Complete Abstract Phase documentation with rigorous domain analysis
- Weeks 3-4: Execute iterative design process with systematic feedback integration
- Weeks 5-6: Implement interactive visualizations meeting technical requirements
- Weeks 7-8: Conduct comprehensive usability evaluation with qualitative assessment
- Week 9: Synthesize collaborative authoring insights and methodology contributions
- Templates/: Utilize standardized research instruments for consistent documentation
- Training-Materials/: Reference complete case studies for methodology application examples
- QUICK-START-GUIDE.md: Consult simplified overview for rapid methodology comprehension
This methodology ensures:
- Systematic Documentation: Every design decision and iteration captured with research rigor
- Reproducible Process: Standardized framework enabling cross-study comparison and validation
- Collaborative Framework: Structured approach optimizing domain expert and visualization researcher interaction
- Research Contribution: Documented insights advancing design study methodology understanding
- Academic Rigor: Comprehensive research question framework supporting scholarly analysis
The framework addresses key research questions across all phases while maintaining flexibility for domain-specific adaptation. Each stage builds systematically on previous phases, ensuring comprehensive coverage of the design study lifecycle from initial problem characterization through final collaborative authoring analysis.
The following evaluation questions guide assessment and validation throughout the methodology:
- Does the project proposal properly outline the domain problem to be solved?
- What were the different modalities involved in conducting the semi-structured interviews, and what was their lineage?
- How effective was the task characterization (translation of domain problems into abstract tasks)?
- Was the task typology (Brehmer and Munzner) applied to its full potential?
- Did they supplement it with the analytic task taxonomy (Amar et al.) to enable finer-grained description of lower-level tasks?
- How many iterations of data cleaning and mining are done throughout the exploratory phase?
- Which platforms are used to generate exploratory visualizations, and how do they differ in usage?
- How do artifacts evolve after the feedback session?
- How effective are sketches and storyboards in representing tasks identified from the abstract phase?
- How much evolution can be seen from initial sketches to final digital sketches?
- Are standard drawing tools sufficient to properly create static versions of final visualizations?
- How effectively are static digital sketches translated into web-based interactive visualizations?
- Does implementation include minimum requirements: 2+ visual encoding techniques, proper color channel usage, brushing & linking interactivity, details-on-demand interactivity?
- Is feedback gathered throughout the process properly taken into account?
- Are final visualizations exploration-friendly?
- How is the qualitative usability study performed?
- How satisfactory are the artifacts and final visualizations?
- Is another final iteration needed?
- What key elements emerge in collaborative authoring of visualization systems?
Note: While some questions may appear redundant, they are relevant to specific stages of the design process and enable comprehensive methodology validation.