
DES303 Week 5: Turning Tickers into a Running Prototype Through Arena Design, Focus Integrity, and Cross-Device Testing

Seung Beom Yang · DES303 – Design Research Practice
INTEGRATED REFLECTIVE CYCLE — STRUCTURE OF THIS POST
  • Experience · What did I make and test?
  • Reflection · What changed once it became real?
  • Theory · Why this direction?
  • Preparation · What next for crit?

Introduction

Week 5 was the point where Tickers started moving from a speculative concept into a more believable and testable system. In Week 4, I used low-fidelity wireframes to test the central tension of the project: whether a productivity-support system could feel helpful on the surface while also normalising monitoring, pressure, and judgement underneath. That early work was useful for clarifying the core idea, but it was still limited. The prototype had logic, but it did not yet have a convincing emotional tone, and it did not yet reflect the actual devices through which the system would need to operate.

This week, I wanted to continue prototyping by testing the project in a more realistic form. The most important part of Tickers is not only the interface, but the way the arena feature and the focus-integrity feature work together. The arena uses connection, commitment, nudging, and social pressure to make users more focused through accountability. Focus integrity then makes that accountability more serious by trying to verify whether the user is actually working. Together, these two parts create the strongest tension in the project: a system that helps people focus, but also makes them feel watched. The Week 5 material also stresses that this stage should involve both continued making and clear crit preparation, rather than treating the prototype as finished too early.

To do that, I moved from low-fidelity structure into visual precedent research, adopted a stronger design direction built on CRED and NeoPOP, expanded the prototype from mobile-only into a cross-device system, and began implementing it through Flutter and Django. I also uploaded the app to TestFlight and tested it with around ten people, which exposed problems that would never have shown up in static mockups. By the end of the week, Tickers had become more coherent, more believable, and much harder to treat as a neutral productivity idea.

This week's experiment was not about finishing Tickers as a full app. It was about testing whether the concept could become believable as a cross-device speculative system. The core question was: can Tickers make focus feel like a social commitment rather than a private task? I tested this by connecting the mobile arena layer, the desktop focus-integrity tracker, and backend sync into one prototype. If the prototype worked, focus would no longer feel like something the user privately completes. It would become something declared, tracked, challenged, and socially judged.

WEEK 5 EXPERIMENT FRAME
  • What I was testing
    Whether the arena and focus-integrity layers could make productivity feel social, scored, verified, and uncomfortable.
  • What I was not testing yet
    Final algorithm accuracy, anti-cheat robustness, full privacy consent, real money, full security, or long-term behaviour change.
  • Success criteria
    The prototype would be useful if people could understand that Tickers is not only a timer, but a system where focus becomes public proof.

The Experience

From low-fidelity wireframes to a stronger visual direction

My starting point this week was the low-fidelity wireframe work from Week 4. Those wireframes helped me define the basic structure of Tickers, especially the relationship between focus tracking, social accountability, and behavioural scoring. They were useful for testing system logic but not for testing emotional reading. At that stage, I could ask whether the concept made sense, but I could not yet tell how persuasive, attractive, or unsettling it would feel once it started looking like a real app.

I was particularly drawn to the CRED app and the NeoPOP design system, which I used as visual precedents for a stronger design direction. I discuss that precedent work in more detail in the following section.

Precedent analysis and behavioural references

As the prototype moved beyond low-fidelity wireframes, I also looked more closely at existing apps and behavioural patterns that could help me understand how Tickers should feel and function. On the productivity side, I looked at apps such as Todoist and TickTick to understand how task systems reduce friction, structure goals, and make repeated use feel manageable rather than overwhelming. I also looked at Todomate, which was useful because it brought in a stronger sense of social accountability and emotional connection around task completion. These precedents helped me think more clearly about how a productivity system can be made usable, motivating, and habit-forming before it becomes overtly controlling. Alongside these, I used CRED and the NeoPOP design system as visual precedents. What interested me there was the bold contrast, strong hierarchy, reward-oriented polish, and game-like atmosphere. That direction felt especially relevant because Tickers is not just a task manager — it is a social and psychological system built around commitment, pressure, reward, and risk.

FIGURE 1 · VISUAL PRECEDENT REFERENCES — CRED (iOS)
CRED iOS screen used as a visual precedent — reward-oriented surface with strong hierarchy and high-contrast NeoPOP styling.
CRED iOS screen used as a visual precedent — polished status card showing behavioural progression as earned reward.
CRED iOS screen used as a visual precedent — game-like depth and edges applied to ordinary financial actions.
CRED iOS screen used as a visual precedent — high-finish layout that makes behavioural logics feel legitimate.
WHAT CRED / NEOPOP OFFERED THE PROJECT
  • Contrast & hierarchy
    High-contrast blocks and decisive typographic hierarchy make every action feel deliberate.
  • Reward-oriented surfaces
    Polished reward summaries and status cards make progress feel earned and collectible.
  • Game-like intensity
    Depth, edges, and state changes borrow from gaming — motivating without feeling childish.
  • Trust through finish
    High finish reads as seriousness, which is exactly what makes behavioural logics feel legitimate.
Figure 1. Reference captures from the CRED (iOS) app alongside the four qualities I took away — hierarchy, reward tone, game-like intensity, and trust through finish — that became the visual precedent for the Tickers redesign.

I also began thinking more intentionally about the behavioural logic behind the arena feature. The arena works by using connection and social visibility as a form of motivation. Instead of relying only on self-discipline, it introduces commitment, accountability, nudging, proof, and the risk of being challenged. Reading Thaler and Sunstein's Nudge alongside this helped me think more deliberately about how public commitment, proof, and the possibility of challenge could work together as behavioural pressure rather than as separate app features. That makes the system more engaging, but also more ethically uncomfortable, because the same mechanisms that help users focus can also make them feel watched or pressured.

This behavioural framing became especially important once I began developing the focus-integrity feature. Together, the arena and focus integrity started to feel less like separate functions and more like two parts of the same system: one uses social connection to increase focus, and the other verifies whether that focus is real. This precedent and psychology work therefore did not just inform the style of the interface. It directly shaped the structure of the prototype and the kind of tension I was testing.

FIGURE 2 · PRECEDENT ANALYSIS AND BEHAVIOURAL REFERENCES
SOURCE / REFERENCE · WHAT I TOOK FROM IT → WHAT CHANGED IN TICKERS
  • Todoist · Task clarity, low-friction productivity flow → Clearer task and bet setup
  • TickTick · Productivity structure, focus routine logic → Stronger session-based organisation
  • Todomate · Social accountability, connection around productivity → Stronger arena social layer
  • CRED · Polished, reward-driven desirability → More persuasive, product-like interface
  • NeoPOP · Bold contrast, game-like visual hierarchy → Stronger arena mood and pressure/reward feel
  • Behavioural / psychology · Commitment, nudging, visibility, pressure → Proof, bluff, accountability, focus-integrity tension
Figure 2. Precedent analysis and behavioural references — comparison board showing how competitor apps, visual precedents, and behavioural logic informed both the structure and tone of the Tickers prototype.

First mid-fidelity pass in NeoPOP

Before committing to a full arena flow, I did a quick first pass applying the NeoPOP direction to four screens I considered central to the system — the to-do page, the arena rooms list, the in-arena state, and the win screen. This was deliberately rough and mid-fidelity. The goal was to see whether the visual language held up as soon as it hit real productivity content, or whether it started feeling decorative before the system had any depth behind it.

FIGURE 3 · FIRST MID-FIDELITY NEOPOP PASS
Mid-fidelity to-do page applying NeoPOP contrast and hierarchy — testing whether the visual language holds up on ordinary productivity content.
To-do page
Mid-fidelity arena rooms list — first attempt at rendering room selection in a reward-oriented NeoPOP style.
Arena rooms
Mid-fidelity in-arena state — first attempt at translating the live session into NeoPOP visual language.
In arena
Mid-fidelity win screen — NeoPOP-styled outcome surface testing whether reward framing reads as earned and motivating.
Win screen
Figure 3. The first mid-fidelity pass applying NeoPOP to four key screens — a quick probe to see what the visual direction revealed and where it started to over-commit before a fuller arena flow was drawn.

That first pass taught me two things quickly. First, the reward tone carried the win screen cleanly — NeoPOP is genuinely good at making outcomes feel earned. The arena rooms list also held up once the hierarchy was given room to breathe. Second, the in-arena and to-do screens started to feel over-styled relative to the behavioural load they were carrying. I was stacking contrast and depth without asking whether each element served the focus-versus-surveillance reading I actually wanted. That told me to pull back, simplify hierarchy, and then commit to a more careful full flow rather than keep decorating isolated screens.

Expanding the prototype into the arena flow

With that first pass as a reference, I expanded the arena into a fuller prototype flow. Rather than showing only isolated interface moments, I designed and connected a wider set of screens that demonstrated how the system works as an experience over time. These included the room list, room lobby, bet-posting screen, live room state, proof submission, bluff challenge, and outcome / recap screen.

The room list and room lobby show how users are drawn into a social environment structured by quotas, seats, buy-ins, and rules. The bet-posting screen makes that more personal by forcing the user to formalise work into commitments with risk attached. The live room screen then shifts the prototype into a public space of visibility, where participants, pots, statuses, and updates make focus into something collectively monitored. The proof submission and bluff challenge screens reveal that the arena is not just playful motivation — it depends on evidence, scrutiny, and the possibility of being challenged by others. The outcome screen completes the cycle by translating survival, busts, and partial wins into a polished reward summary.

Designing this flow made the prototype much easier to understand. More importantly, it also made the system logic harder to avoid. Once the full arena sequence existed, the concept no longer relied on explanation alone. The user journey itself began to reveal how connection is being used as a productivity-enforcing tool.

This arena flow was testing whether connection can become pressure. A normal focus timer only asks the user to manage themselves. The arena makes other people part of that process. They can see the user's commitment, wait for proof, challenge suspicious behaviour, and respond to the outcome. This means the prototype is not only testing a game mechanic. It is testing whether social connection can become a productivity-enforcing system.

FIGURE 4 · ARENA WIREFRAMES — FULL FLOW
Arena room list — open sessions, quotas, buy-ins, and filters used to test how users are invited into the arena.
Room list
Room lobby — pre-session view with seats, rules, quotas, and timing used to test how commitment is framed.
Room lobby
Mission setup — task and wager flow used to test how productivity is turned into a formal commitment with stakes.
Mission / bet-posting
Live room state — participants, pot, progress, and event feed used to test social visibility and pressure.
Live room
Proof submission flow — testing how evidence and verification are made to feel normal inside the system.
Proof submission
Bluff challenge — peer contestation screen used to test how social policing operates inside the arena.
Bluff challenge
Session outcome — survival, busts, and payouts translated into a polished reward summary.
Outcome / recap
Figure 4. The arena wireframes — room list, lobby, mission setup, live room, proof submission, bluff challenge, and outcome — shown together so the flow reads as a connected social-accountability loop rather than isolated screens.

From mobile-only to a cross-device system

One of the biggest changes this week was architectural rather than purely visual. Earlier in the project, Tickers was mostly framed as a mobile experience because mobile was the fastest surface for wireframing and social flow design. As I developed the focus-integrity side more seriously, I realised that a phone-only prototype could not test the concept properly. If the user is actually doing work on a laptop or desktop, the phone cannot meaningfully verify what they are doing. That made the earlier architecture feel inadequate.

Mobile still made sense for the arena's social layer — browsing rooms, posting bets, checking updates, submitting proof, and seeing outcomes — but focus integrity needed to live closer to the actual work device. I therefore expanded the prototype so that the desktop, specifically macOS, became the place where focus integrity operates. This is where signals such as camera / face presence, active tab or window context, and task-related behaviour could be monitored more believably.

That change also forced a broader architecture shift. The earlier version of the prototype leaned toward a local or offline-biased structure. Once the arena became a live multi-user system, that no longer made sense. Bets, bluff calls, proof state, synced session outcomes, and live room updates all depend on shared state between users and devices. The project therefore shifted into a more hybrid synced architecture. I implemented the current version using Flutter on the interface side and Django on the backend side, with a push-and-sync structure so room states and session information could move across devices. The goal of the architecture at this stage was not to build the final product — it was to make the concept testable enough that the arena and focus-integrity layers could actually affect each other.
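
To make the push-and-sync structure concrete, the sketch below shows the shape of the idea rather than the code I actually shipped: a heavily simplified Room model whose backend state is the shared truth, and a view that bumps a revision counter and fans the change out over an FCM topic. The model, endpoint, and field names are illustrative assumptions.

```python
# A minimal sketch of the push-and-sync idea, assuming a simplified Room
# model and the firebase-admin SDK (initialised elsewhere at startup);
# endpoint shape and field names are illustrative, not the actual backend.
import json

from django.db import models, transaction
from django.http import JsonResponse
from firebase_admin import messaging


class Room(models.Model):                      # hypothetical, heavily simplified
    state = models.JSONField(default=dict)     # shared truth for one arena room
    revision = models.IntegerField(default=0)  # monotonic, for drift detection


@transaction.atomic
def update_room_state(request, room_id):
    """Apply a state patch on the backend, then fan it out to every device."""
    room = Room.objects.select_for_update().get(pk=room_id)
    room.state.update(json.loads(request.body))
    room.revision += 1
    room.save(update_fields=["state", "revision"])

    # Phones and the macOS client subscribe to the same per-room FCM topic,
    # so one push reaches every surface of the system at once.
    messaging.send(messaging.Message(
        data={"room_id": str(room.pk), "revision": str(room.revision)},
        topic=f"room-{room.pk}",
    ))
    return JsonResponse({"revision": room.revision})
```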

Moving to a desktop tracker changed the experiment. A phone-only prototype could only ask users to report that they focused. A desktop tracker tries to prove whether they focused. That shift made the project more interesting because proof requires evidence. Once evidence is involved, the project starts raising questions about accuracy, privacy, consent, and trust.

FIGURE 5 · BEFORE / AFTER — ONE FOCUS SESSION, TWO ERAS
BEFORE — EARLY PROTOTYPE
Actors: User · Phone · Local storage
  1. User → Phone: Tap “Start focus”
  2. Phone: Start timer
  3. Phone → User: Show countdown
  4. Phone: Timer ends
  5. Phone → Local storage: Save session
  6. Phone → User: “Done”
  — nothing between “tap start” and “save” verifies what the user actually does —

One device, one timer, no verification — the user's word was the whole system.

AFTER — RUNNING SYSTEM
Actors: User · Phone · Backend · Desktop · Other players
  1. User → Phone: Declare task · buy in · take seat
  2. Phone → Backend: Create session
  3. Backend → Others: FCM push — “room is live”
  4. User → Desktop: Start focus on work device
  LOOP · every few seconds
    • Desktop collects signals (window · face · input · screen diff)
    • If clear: Desktop decides locally
    • If ambiguous: Desktop → Backend /judge verdict
    • Desktop → Backend: integrity score + violations
    • Backend → Others: live state via FCM
  5. User → Phone: End session · submit proof
  6. Phone → Backend: Proof + session ref
  7. Others → Phone: Call bluff?
  8. Phone → Backend: Evaluate with integrity score
  9. Backend → Phone: Outcome · payout
  10. Backend → Others: Final recap via FCM

Declared intent on the phone, behavioural verification on the desktop, shared truth on the backend, live updates to other players — the user's word is just the start of the system.

Figure 5. How one focus session plays out before vs after the architecture shift. Before is short on purpose — the blankness is the argument. After is longer because every new actor is there to do something no other actor could: the phone can't verify itself, the desktop can't mediate bets, the backend can't see the user's face, and other players can't trust anything without shared state.
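
Step 8 of the after-flow ("Evaluate with integrity score") is where the punctual proof meets the continuous record. Below is a hedged sketch of how that weighing could work; the threshold is an assumed placeholder, and only the survive / partial / bust vocabulary comes from the arena itself.

```python
# Hypothetical sketch of bluff evaluation, not the shipped logic. Assumes a
# session record carrying an integrity_score (0-100) and violation events.
from dataclasses import dataclass, field


@dataclass
class SessionRecord:
    integrity_score: int                      # produced tick by tick on desktop
    violations: list[str] = field(default_factory=list)


def resolve_bluff(session: SessionRecord, proof_submitted: bool,
                  valid_threshold: int = 70) -> str:
    """Weigh the single proof moment against the continuous integrity record."""
    if not proof_submitted:
        return "bust"        # no proof at all: the bet is lost
    if session.integrity_score >= valid_threshold and not session.violations:
        return "survive"     # the continuous record backs the proof
    if session.integrity_score >= valid_threshold:
        return "partial"     # proof holds, but flagged events cut the payout
    return "bust"            # proof contradicted by the behavioural record


print(resolve_bluff(SessionRecord(82), proof_submitted=True))   # survive
print(resolve_bluff(SessionRecord(82, ["gaze_away"]), True))    # partial
print(resolve_bluff(SessionRecord(41), True))                   # bust
```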
FIGURE 6 · WHAT TICKERS IS — SYSTEM OVERVIEW
MOBILE — SOCIAL LAYER
  • Arena rooms
  • Bets · proof · bluff
  • Outcomes
BACKEND — SHARED TRUTH
  • Room state
  • Session records
  • Payouts
DESKTOP — INTEGRITY LAYER
  • Focus session
  • Camera · window · task
  • Integrity score
  • Mobile ↔ Backend: bets, proofs, bluff calls
  • Backend → Mobile / Desktop: live sync of room + session state
  • Desktop → Backend: integrity signals + score
Figure 6. Tickers as a system — the mobile app carries the social layer, the macOS client carries the integrity layer, and the Django backend is the shared truth that keeps them in sync.
FIGURE 7 · DOMAIN MODEL — CLASS DIAGRAM

The data model clusters around two centres. The user side owns tasks, a wallet, device tokens, and focus sessions. The arena side is a room-centric graph: a Table hosts Seats, each seat declares Missions, and each mission can collect TaskConfirmations (proof) and BluffChallenges before a Payout resolves it. The bridge between the two sides is the optional foreign key FocusSession.arena_seat — that single edge is what allows a verified focus session on someone's desktop to count as progress inside a social room on their phone.

Figure 7. The Django domain model as a UML class diagram. Every relationship is a real foreign key in the backend. The single link labelled counts toward between FocusSession and Seat is the hinge between the integrity layer and the social layer.
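
To show the shape of that graph in code, here is a deliberately minimal version of the model. The class names come from the diagram above; everything else (field names, field types) is an illustrative assumption, and the real backend has more. The line to read is the nullable arena_seat foreign key at the bottom.

```python
# A minimal sketch of the domain model described above; fields are
# illustrative stand-ins, only the class names and edges come from Figure 7.
from django.db import models


class Table(models.Model):               # an arena room
    quota_minutes = models.PositiveIntegerField()
    buy_in = models.PositiveIntegerField()


class Seat(models.Model):                # one player's place in a room
    table = models.ForeignKey(Table, on_delete=models.CASCADE, related_name="seats")


class Mission(models.Model):             # a declared commitment with stakes
    seat = models.ForeignKey(Seat, on_delete=models.CASCADE, related_name="missions")
    title = models.CharField(max_length=200)
    wager = models.PositiveIntegerField()


class TaskConfirmation(models.Model):    # submitted proof for a mission
    mission = models.ForeignKey(Mission, on_delete=models.CASCADE)


class BluffChallenge(models.Model):      # peer contestation of a proof
    mission = models.ForeignKey(Mission, on_delete=models.CASCADE)


class Payout(models.Model):              # resolution of a mission
    mission = models.OneToOneField(Mission, on_delete=models.CASCADE)
    amount = models.IntegerField()


class FocusSession(models.Model):        # a verified desktop focus session
    integrity_score = models.PositiveSmallIntegerField(default=100)
    # The hinge between the integrity layer and the social layer: a session
    # MAY count toward a seat, but can also exist on its own.
    arena_seat = models.ForeignKey(Seat, null=True, blank=True,
                                   on_delete=models.SET_NULL,
                                   related_name="focus_sessions")
```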

Building and testing the prototype

After establishing the design and architecture direction, I started coding the system. I used Flutter for the front-end app and Django for the backend. On the focus-integrity side, I began implementing a score that uses multiple signals rather than a simple timer — camera / face presence, active tab or window context, and a lightweight AI judge layer. The AI judge is partly working at this stage. It can recognise some patterns but is still unstable and behaves inconsistently enough that I would only describe it as an emerging component rather than a resolved one. That uncertainty is actually useful, because it shows where the boundary currently sits between concept and system.
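
As a way of pinning down what "multiple signals rather than a simple timer" means, the sketch below blends per-signal scores into one 0–100 value, mirroring the per-signal rows that appear later in Figure 11. The weights are placeholders of mine, not the tuned values in the build.

```python
# Illustrative blending of per-signal scores into one 0-100 integrity value.
# Weights are placeholder assumptions, not the tuned values in the build.
SIGNAL_WEIGHTS = {
    "screen": 0.30,   # active window / tab classification
    "input": 0.20,    # typing and click activity
    "camera": 0.20,   # face presence
    "image": 0.15,    # screen-diff motion since last tick
    "ai": 0.15,       # judge verdict, when it was consulted
}


def blend_integrity(signals: dict[str, float]) -> int:
    """Weighted average of per-signal scores (each 0-100) into one score."""
    total = sum(SIGNAL_WEIGHTS[name] * signals.get(name, 50.0)
                for name in SIGNAL_WEIGHTS)  # missing signals default to neutral 50
    return round(total)


# The ambiguous case from Figure 11: neutral Chrome window, AI in fallback.
print(blend_integrity({"screen": 50, "input": 100, "camera": 100,
                       "image": 100, "ai": 50}))  # a middling, recovering score
```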

I also uploaded the app to TestFlight and tested it with around ten people. This changed the prototype significantly, because it stopped being something I could reason about alone. A few problems made this especially clear. Duplicate session records could appear because of lifecycle issues. Auto-end could behave unreliably while the app was backgrounded. Night mode could break mid-pomo when different subsystems interfered with each other. There were also moments when the macOS session state and the phone session state drifted apart, breaking the link between the focus-integrity layer and the social arena layer. Debugging was frustrating because every broken test session also cost my friends time and patience. The screens below are captured directly from the Flutter build — what the TestFlight users actually saw as they moved through login, to-do, calendar, and the two arena-creation flows.

These failures were useful because they showed that focus-integrity correctness is not only a technical issue. If the system creates duplicate sessions or loses sync between phone and desktop, users cannot trust the score, the proof, or the arena outcome. The experiment therefore revealed that trust is part of the design problem.
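
For the duplicate-session failure specifically, the likely shape of the fix is idempotency: the client generates one key per session, and retries caused by lifecycle restarts collapse onto the same row. This is a sketch of the approach, with names of my own, not the code as shipped.

```python
# Hedged sketch of an idempotent session-start, addressing the
# duplicate-session lifecycle failure described above. Model and function
# names are hypothetical.
import uuid

from django.db import models


class FocusSessionRow(models.Model):             # hypothetical, simplified
    client_key = models.UUIDField(unique=True)   # generated once per session on device
    started_at = models.DateTimeField(auto_now_add=True)


def start_session(client_key: uuid.UUID) -> tuple[FocusSessionRow, bool]:
    """Safe to call repeatedly: retries and restarts hit the same row."""
    return FocusSessionRow.objects.get_or_create(client_key=client_key)
```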

FIGURE 8 · CODED FLUTTER APP — SHIPPED TO TESTFLIGHT
Coded Flutter login screen — actual build shipped to TestFlight, entry point into the arena and focus-integrity system.
Login
Coded Flutter to-do screen — the productivity surface that feeds commitments into arena sessions.
To-do
Coded Flutter calendar screen — time-based view of upcoming and past sessions.
Calendar
Coded Flutter private-room creation flow — commitment setup framed as invitation rather than public bet.
Arena creation (private)
Figure 8. Screens captured from the actual Flutter build shipped to TestFlight — not mockups. Shows the live mobile surface of Tickers as it stood during testing: login, to-do, calendar, and the private arena-creation flow.
FIGURE 9 · FOCUS-INTEGRITY WIREFRAMES — HUD STATES & RECAP
Collapsed desktop HUD — small calm surface showing the current focus-integrity score without interrupting work.
HUD collapsed
Expanded HUD — live behavioural signals including typing pace, camera presence, screen context, and the current task.
HUD expanded
HUD warning state — falling focus-integrity score with a flagged camera signal and recovery countdown.
HUD warning
Mobile session summary — integrity, engagement, and confidence translated into a post-session recap.
Session summary
Figure 9. The focus-integrity side — collapsed HUD, expanded HUD, warning state, and the mobile session summary. Together they show how continuous signals on macOS are eventually translated into a recap that feeds back into the arena.

How focus integrity actually works

Focus integrity isn't a binary “are you working?” — it's a layered decision that escalates only when it has to. The desktop collects cheap behavioural signals every few seconds: which window is active, whether the user is typing, whether a face is in front of the camera, whether the screen has changed since the last tick. Most of the time these signals are clear enough to decide locally — a banned app is off-task, a task-keyword match is on-task, idle-plus-face-absent is AFK. When the signals are ambiguous, the client calls the backend's AI judge, which uses a lightweight text-based model for low-cost classification. Only if that judge itself asks for vision does the system capture a screenshot and escalate to the vision model. Every decision feeds an integrity score (0–100), a timeline, and a list of violation events, which together determine whether a session is valid, warned, or auto-stopped — and ultimately whether the arena trusts the proof the user submits.
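
The escalation order is easier to see as code than as prose. This is a compact sketch, not the shipped client: the gate, judge, and vision calls are stand-ins, and the fuller gate rules are sketched after Figure 10.

```python
# Compact sketch of the escalation order described above. `local_gate`,
# `judge_text`, and `judge_vision` are stand-ins for the real components.
def decide_tick(signals: dict, screenshot: bytes | None) -> dict:
    """One tick: local gate first, text judge if ambiguous, vision only on request."""
    local = local_gate(signals)                      # cheap, no network call
    if local != "ambiguous":
        return {"on_task": local == "on-task", "source": "local"}

    verdict = judge_text(signals)                    # backend text model (assumed call)
    if verdict.get("need_vision") and screenshot is not None:
        verdict = judge_vision(signals, screenshot)  # costly path, used rarely
    return verdict


def local_gate(signals: dict) -> str:
    """Stub of the certainty gate; the fuller rules follow Figure 10."""
    if signals.get("banned_app"):
        return "off-task"
    if signals.get("task_keyword_match"):
        return "on-task"
    return "ambiguous"


def judge_text(signals: dict) -> dict:               # placeholder for POST /judge
    return {"on_task": True, "confidence": 0.5, "need_vision": True, "reason": "stub"}


def judge_vision(signals: dict, screenshot: bytes) -> dict:
    return {"on_task": False, "confidence": 0.9, "need_vision": False, "reason": "stub"}
```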

FIGURE 10 · FOCUS INTEGRITY PIPELINE
STAGE 1 · DESKTOP — EVIDENCE COLLECTOR
  • Active window: bundle · title · dwell
  • Input activity: typing · clicks
  • Face presence: camera
  • Screen diff: since last tick
  • Violation flags: chrome_block · gaze_away
STAGE 2 · LOCAL CERTAINTY GATE
decides locally if the signals are clear — no network call
  • Clear signal → decided locally: on-task · off-task · AFK
  • Ambiguous → escalate to Stage 3
STAGE 3 · AI JUDGE — TEXT MODEL
returns { on_task, confidence, need_vision, reason }
  • need_vision = no → verdict
  • need_vision = yes + screen changed → vision model + screenshot
Integrity score · 0–100 · plus timeline, violations, engagement, confidence
Arena consequence:
  • Valid → session counts · proof trusted
  • Warning → on-screen warning shown
  • Broken → auto-stop · proof invalidated
RULES THE JUDGE FOLLOWS
  • Declared task is authoritative
  • Banned apps (YouTube, Netflix, TikTok, games) are off-task regardless of other signals
  • Idle + face-absent is AFK, treated as off-task
  • Passive tasks allow low input activity
  • Neutral apps with no task-keyword match are ambiguous — vision escalation happens here
Figure 10. The focus-integrity pipeline — a three-stage escalation from desktop evidence to local gate, then to AI text classification, and only if needed to vision. Most ticks never call the model at all; the escalation is cost-aware by design.
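
The five rules in the figure translate almost line for line into a certainty gate. In this sketch only the rule order comes from the figure; the thresholds, app list, and keyword matching are illustrative assumptions of mine.

```python
# The five judge rules from Figure 10, rewritten as a straight-line gate.
# Thresholds and the app list are illustrative, not the shipped values.
BANNED_APPS = {"YouTube", "Netflix", "TikTok"}   # plus games in the real list

IDLE_SECONDS = 120    # assumed idle cutoff


def certainty_gate(active_app: str, window_title: str, declared_task: str,
                   idle_for: float, face_present: bool,
                   passive_task: bool) -> str:
    # Rule 2: banned apps are off-task regardless of other signals
    if active_app in BANNED_APPS:
        return "off-task"
    # Rule 3: idle + face-absent is AFK, treated as off-task
    if idle_for > IDLE_SECONDS and not face_present:
        return "afk"
    # Rule 4: passive tasks (reading, watching a lecture) allow low input
    if passive_task and face_present:
        return "on-task"
    # Rule 1: the declared task is authoritative; a keyword match decides
    keywords = declared_task.lower().split()
    if any(word in window_title.lower() for word in keywords):
        return "on-task"
    # Rule 5: neutral app, no keyword match: ambiguous, escalate to the judge
    return "ambiguous"
```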
FIGURE 11 · LIVE FOCUS TRACKER — THE PIPELINE IN ACTION
Live focus tracker HUD running on macOS — showing score 38 (RECOVERING) with sub-scores I 46 · E 0 · C 77, a dropping score graph, and signal breakdown: SCREEN (Google Chrome neutral — 50), INPUT (Active 2s ago — 100), CAMERA (Face detected 100% — 100), CHROME (Clean session — 100), SYSTEM (Awake — 100), IMAGE (Motion 0.1% · static 5 — 100), AI (fallback unavailable 0% — 50). Live camera feed and low-score screen snapshot shown below.
Figure 11. The running focus tracker — a live screenshot of the macOS HUD during a coding session. The per-signal rows (SCREEN, INPUT, CAMERA, CHROME, SYSTEM, IMAGE, AI) are the same Stage 1 evidence channels from Figure 10, now rendered as a real UI with live scores. The state is RECOVERING because the active app is classified neutral and the AI judge is in fallback — the exact ambiguous case the pipeline is designed to escalate on.

At this stage, focus integrity should be understood as a confidence system, not as truth. The score estimates whether a session appears task-aligned based on available evidence. It can still misread context. This matters because if the interface presents the score as absolute, it becomes unfair very quickly. If it presents the score as uncertain, explainable, and contestable, then the system becomes more believable and more ethically interesting.
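
At the data level, "uncertain, explainable, and contestable" could be as simple as never letting the score travel alone. A small sketch of that idea, with field names of my own:

```python
# Sketch of a verdict record that carries its own uncertainty and explanation.
# Field names are hypothetical, not the actual payload.
from dataclasses import dataclass


@dataclass(frozen=True)
class IntegrityVerdict:
    score: int                 # 0-100 estimate, not ground truth
    confidence: float          # how sure the system is of its own reading
    reason: str                # human-readable explanation shown in the HUD
    contestable: bool = True   # the user can always dispute the reading


verdict = IntegrityVerdict(score=38, confidence=0.5,
                           reason="Active app is neutral and the AI judge is in fallback")
```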

FIGURE 12 · UML SEQUENCE — ONE FOCUS SESSION, END-TO-END

Architecture diagrams freeze the system — they show what exists, not what it does. The sequence makes three things visible that matter for the concept. First, cost-aware escalation: most ticks are decided locally and never hit the AI; the judge is called only when signals are ambiguous, and vision only when the judge asks for it. Second, continuous vs punctual evidence: integrity is produced every few seconds across the whole session, but proof is a single moment at the end — the arena's job is to weigh the moment against the continuous record. Third, multi-actor causality: a single user action (tap end, submit proof) sets off parallel work across the backend, the AI, and other players' phones via FCM, and the diagram makes those parallel lanes legible.

Figure 12. UML sequence for one focus session end-to-end, with formal loop / alt / opt blocks and a closing note. This is the diagram a critic can point at to ask “where could this fail?” — the TestFlight failures in Figure 13 each map onto specific steps or blocks here (duplicate records near the PATCH integrity_score step, auto-end failing around the loop boundary, cross-device desync in the FCM branch).

What this made clear to me is that technical complexity is not just a build problem in Tickers — it is part of what makes the speculative condition believable, because the system only works when accountability is distributed across devices, backend logic, and other users.

FIGURE 13 · TESTFLIGHT — WHERE THE LIVE SYSTEM BROKE
FAILURE MODES SURFACED BY ~10 USERS
  • 01
    Duplicate session records
    Lifecycle issues produced duplicate session entries, making recaps and quotas unreliable.
  • 02
    Unreliable auto-end
    Sessions did not always end cleanly while the app was backgrounded on mobile.
  • 03
    Night mode breaking mid-pomo
    Subsystems interfering with each other caused theme state to fail mid-session.
  • 04
    Mobile–macOS state drift
    Session state between phone and desktop occasionally fell out of sync, breaking the link between social arena and focus-integrity layers.
Figure 13. None of these issues are glamorous, but together they taught me the most important technical lesson of the week — a live social system fails in ways that isolated screens do not, and those failures directly affect how believable the concept feels.
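
Failure 04 also has a recognisable shape: state drift happens when two devices each believe their own copy. One hedged fix is to make the backend revision authoritative and have every client drop stale pushes; the revision counter here matches the sync sketch earlier, and the class is a sketch of mine, not the shipped client.

```python
# Hypothetical client-side guard against mobile-macOS state drift: the
# backend revision is authoritative, stale or out-of-order pushes are dropped.
class SessionStateCache:
    """Client-side cache that only ever moves forward in revision order."""

    def __init__(self) -> None:
        self.revision = -1
        self.state: dict = {}

    def apply(self, revision: int, state: dict) -> bool:
        if revision <= self.revision:   # stale or duplicate push: drop it
            return False
        self.revision, self.state = revision, state
        return True


cache = SessionStateCache()
cache.apply(3, {"phase": "live"})
cache.apply(2, {"phase": "lobby"})      # late out-of-order push is ignored
assert cache.state["phase"] == "live"
```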

Reflection on Action

The most important thing I learned this week is that fidelity changes meaning. In Week 4, I was mainly testing whether the idea of Tickers made conceptual sense. In Week 5, I started testing what happens when that same idea becomes polished, functional, and believable. That shift changed the project significantly.

The first big change came from the visual direction. Using CRED and NeoPOP as precedents helped Tickers feel much more coherent and much more persuasive. The arena started to feel exciting, structured, and legible. That was useful because it made the prototype feel like something people might actually want to use. At the same time, that was also what made it more uncomfortable. The more attractive the interface became, the easier it was to imagine the system normalising itself through motivation, excitement, and reward. The design language is not neutral — it actively shapes whether the concept reads as supportive, seductive, manipulative, or critical.

The second big change came from architecture. Once I moved from a mobile-only idea to a cross-device setup, the focus-integrity system stopped being speculative in the abstract and became much more concrete. Verified focus was no longer just described; it became a running mechanic where signals could be read, judged, and turned into consequences for arena participation. The arena and focus-integrity features now reinforce each other in a way that feels much closer to the actual concept I had in mind: connection being used to motivate focus, and focus being validated through watching.

What was harder than expected was the cost of that realism. Once the system became live, time-based, and multi-user, testing became slow and frustrating. Debugging no longer only affected me — it affected the whole social loop of the app. I also noticed again that I tend to overbuild once the prototype starts working. This also reflects my own positionality as a designer, because once a concept starts making sense I naturally move towards building systems and infrastructure rather than staying with a smaller experimental probe. That helps me prototype quickly, but it also risks pushing the project towards technical completion before I have fully tested the most important uncertainty. There was a constant temptation to keep improving the full system rather than keep asking what single quality I still most need to learn about. The Week 5 guidance is useful here because it reminds me that an early prototype is not the full project and should not try to test everything at once.

This week also taught me something more specific about the concept itself. The arena works because it uses connection, commitment, and psychological nudging to make people focus harder. Focus integrity works because it turns that motivation into something monitored and verified. The project becomes strongest when those two layers are combined — and that is also where it becomes most ethically uncomfortable. The more effectively the app helps people focus, the more it also risks making surveillance feel normal.

FIGURE 14 · FIDELITY SHIFT
WEEK 4 — LOW FIDELITY
  • Tests whether the idea makes sense
  • Logic without emotional tone
  • Isolated screens, no live state
  • Concept readable by me, not by others
WEEK 5 — RUNNING PROTOTYPE
  • Tests what polish does to the meaning
  • Reward-oriented tone makes critique sharper
  • Live multi-user sessions with synced state
  • Concept readable through the journey itself
Figure 14. How fidelity changed the meaning of the prototype — the same concept reads differently once it is polished, synced, and playable by others.

Theory

This week's prototyping direction still fits strongly with the brief. The DES304 stream is asking for a plausible future condition shaped by emerging technologies, and for speculative artefacts or systems that make socio-technical change tangible enough to be questioned. Tickers is becoming stronger as a response to that brief precisely because it is no longer only an idea about productivity. It is becoming a more believable artefact that demonstrates how support, social connection, reward, and surveillance could merge inside one everyday system.

The Week 5 material also helped clarify why the prototype needed to develop in this direction. A strong prototype at this stage is not supposed to test everything. It should focus on what matters most now, what is most uncertain, and what can realistically be learned through making. For me, that uncertainty was no longer “does the concept exist?” It had shifted into “what happens when the system becomes visually attractive, socially functional, and technically believable?” That is why the move into precedent-driven design, cross-device architecture, and real testing made sense.

The visual precedent research also mattered conceptually, not just aesthetically. One of the key themes I identified in Week 4 was that harmful systems often become normal not through force, but through usefulness, care, and desirability. By moving Tickers into a CRED / NeoPOP-inspired direction, I was able to test that mechanism more directly. The more polished and reward-oriented the interface became, the more clearly I could see how the concept depends on attraction as much as enforcement. More broadly, the precedent analysis helped me see that Tickers needed to borrow not just the visual confidence of reward-driven apps, but also the behavioural structure of productivity and accountability systems already normalised in everyday use.

Finally, the architecture shift also has a theoretical side. Moving from a mostly local prototype into a synced cross-device system is not just an engineering decision. It reflects the kind of socio-technical environment the project is imagining. Systems like Tickers do not belong to one screen or one moment — they live through connected devices, background signals, shared states, and distributed judgement. That made the hybrid architecture more than a technical solution. It became part of the speculative condition itself.

Preparation

The next step is to bring this more developed version of Tickers into the Week 6 crit without presenting it as a resolved final product. The Week 5 material makes it clear that crit should not become pure show-and-tell — it should focus on what was tested, what was observed, what seems promising, and what should happen next.

For the crit, I plan to bring the progression from low-fidelity wireframes to visual precedents to the current higher-fidelity and coded prototype. I want to show the arena flow clearly enough that others can understand the system without needing every technical detail explained. During the crit, I will collect feedback through written notes organised into three categories: what feels strongest, what feels confusing or too heavy, and what should be simplified or tested next. After the session, I will use those notes to decide which part of the prototype should be refined first. I also want to include the cross-device architecture and the focus-integrity logic, because those now affect the meaning of the prototype in a major way.

FIGURE 15 · WEEK 6 CRIT — FEEDBACK I WANT
SIX TEST-BASED QUESTIONS FOR THE NEXT CRIT
  • Q01 · Does Tickers read as a speculative future culture, or does it still read too much like a normal productivity app?
  • Q02 · Does the arena make connection feel motivating, or does it make connection feel like pressure and control?
  • Q03 · Does the focus-integrity system feel believable enough to support the concept, even if it is not technically final?
  • Q04 · What privacy or security concerns appear immediately, especially around camera presence, screen evidence, and stored proof?
  • Q05 · Should the project move away from money-like stakes and towards proof, privacy, reputation, or trust?
  • Q06 · What should I simplify before the next experiment?
Figure 15. The six feedback questions I want the next crit to answer — shaped to sharpen the prototype rather than reopen the concept.

Being clear about what I am not testing yet

I also need to be explicit about the current limits of the prototype, so the crit does not end up debating something I am not yet claiming. Naming these limits should keep the feedback focused on the right questions rather than on whether I am presenting a finished product.

FIGURE 16 · WHAT THIS PROTOTYPE IS NOT TESTING YET
  • 01
    Final AI accuracy
    The AI judge is still an emerging component. It recognises some patterns but behaves inconsistently, so I treated it as a confidence signal rather than a resolved decision-maker.
  • 02
    Anti-cheat robustness
    I was not yet testing the system against adversarial users actively trying to game the integrity score, bluff challenge, or proof flow.
  • 03
    Privacy consent and stored proof
    Camera presence, screen evidence, and saved proof were used to make the concept testable, but the final consent model and data-handling rules were not solved yet.
  • 04
    Real money or full security
    The buy-in and payout logic was tested as a speculative pressure mechanic, not as a financial system with production-grade security.
  • 05
    Long-term behavioural effects
    I was not yet testing whether verified focus genuinely changes habits, wellbeing, or productivity over time.
  • 06
    Final architecture
    The current Flutter + Django + push-sync setup is a working prototype, not a consolidated production architecture.
Figure 16. The explicit boundaries of the current prototype — keeping the AI judge and the overall architecture framed as works in progress rather than resolved components.

Conclusion

Week 5 was important because it turned Tickers from a structured concept into a much more believable prototype. By moving from low-fidelity wireframes into a precedent-driven visual direction, then into a cross-device architecture and an early coded system, I was able to test not only how the app works, but how it feels. The most useful insight from this week is that the arena feature and the focus-integrity feature become much stronger when they are combined. The arena uses connection and nudging to increase focus. Focus integrity then makes that process more serious by verifying behaviour. Together they create a system that can genuinely help people concentrate, while also making them feel watched in the process.

That is the space the project now sits in most clearly. The more believable and effective Tickers becomes, the more it exposes the uncomfortable possibility that behavioural control can be accepted when it is wrapped in motivation, reward, and connection. My next step is to use crit to decide which parts of the current prototype are expressing that tension most clearly, and where the next iteration should become more focused.

FIGURE 17 · WEEKS 1–5 PROGRESSION
WEEK 1 · Values and recurring themes
WEEK 2 · Reframe the question
WEEK 3 · Medium shapes message
WEEK 4 · Scope brief, Experiment 1
WEEK 5 · Running prototype, fidelity shift
Figure 17. The arc from personal values to a running cross-device prototype — Week 5 is where the concept started behaving like a real system.

References

  • Baldwin-Ramult, L. (2026). DES304: Emerging Technologies stream brief [Course brief, University of Auckland].
  • CRED. (2026). NeoPOP design system. https://cred.club/neopop
  • Design Research Practice. (2026). Week 5 blog guide [Course handout, University of Auckland].
  • Doist. (2026). Todoist [Mobile / desktop app]. https://todoist.com
  • Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.
  • TickTick Team. (2026). TickTick [Mobile / desktop app]. https://ticktick.com
  • Todomate. (2026). Todomate [Mobile app]. https://www.todomate.net

Note on figures: Figures 2, 5, 6, 7, 10, and 12–17 were composed by the author as comparison boards, architecture diagrams, process maps, and reflection frames. Figure 1 shows third-party reference captures from the CRED (iOS) app used as visual precedents. Mid-fidelity and wireframe screens (Figures 3, 4, and 9) were created by the author for the Tickers prototype. Figure 8 shows screens captured from the author's Flutter build shipped to TestFlight. Figure 11 is a live screenshot of the author's macOS focus-tracker HUD while running.