
DES300 Research Proposal

Verified Effort — Speculating on AI-Mediated Accountability and Future Efficiency Culture

Seung Beom Yang
Stream: DES300 — Emerging Technologies · Project Partner: Empathic Computing Laboratory
University of Auckland — Te Whare Wānanga o Tāmaki Makaurau

1.0 Abstract

This research proposal responds to the DES300 Emerging Technologies stream, which asks how emerging technologies and speculative design might reimagine how people in Aotearoa connect, collaborate, and create. The project investigates a plausible future efficiency culture in which focus, effort, and contribution are no longer treated as private intentions or assumed qualities, but become visible, measurable, contestable, and socially accountable through Artificial Intelligence (AI), behavioural sensing, peer contestability, and commitment stakes.

The proposed design response is a speculative future productivity system, currently titled Tickers. Rather than functioning as a conventional productivity app, it operates as a provotype through which audiences can experience and question a future culture of verified effort. A provotype is used here as a provocative prototype: an artefact designed to expose assumptions and stimulate debate rather than validate a solution. In this system, users enter Proof Spaces, declare tasks, receive Focus Integrity scores, submit evidence of work, and allow peers or systems to contest whether their effort is valid. The project investigates the tension between support and control: while such a system may help people focus, collaborate, and stay accountable, it may equally normalise surveillance, pressure, shame, and behavioural control.

The research uses speculative design, critical design, provotyping, and practice-led design research as its methodological framework. It uses secondary and non-participative methods, including literature review, precedent analysis, technology review, scenario building, system mapping, interface prototyping, and ethical analysis. The intended contribution is not to prove that such a system improves productivity, but to make a possible future culture of verified effort visible, plausible, and debatable.

2.0 Keywords

Speculative Design · Emerging Technologies · Artificial Intelligence · Verified Effort · Future Efficiency Culture · Behavioural Monitoring · AI-Mediated Accountability · Social Accountability · Provotyping · Aotearoa Futures

3.0 Position Statement

My position as a designer is shaped by a strong interest in systems, productivity, behavioural design, and emerging technologies. I tend to move quickly from a problem into a technical workflow, interface, or prototype. This is a strength because it lets me build and test ideas at speed. It is also a risk, because I may over-trust systems as solutions before I have understood the social and ethical consequences they carry.

This research grew out of my own interest in focus, discipline, accountability, and productivity. However, this project is not simply about making people more productive. It is also a critical reflection on my own tendency to solve human problems through structure, measurement, and enforcement. The project asks whether a system that helps people focus can also become a system that watches, pressures, and controls them.

I am not anti-technology. I believe emerging technologies can genuinely help people organise their lives, collaborate more fairly, and act with greater intention. What concerns me is how useful systems become normal. A system does not need to feel dystopian to become harmful. It can be accepted because it is convenient, motivating, beautiful, supportive, or socially useful. This project therefore uses a speculative provotype, currently titled Tickers, to question what people may accept in exchange for efficiency, accountability, and support.

[Diagram: "My Designer Position". Strengths: build at speed, systems thinking, behavioural design. Risks: over-trust in structure, solving by enforcement, missing social cost.]
Figure 1. The designer position behind this research — strengths paired with the risks the project must hold itself accountable to (author-created).

4.0 Introduction

The DES300 Emerging Technologies stream poses the question: “How might we use emerging technologies and speculative design to reimagine how people in Aotearoa connect, collaborate, and create?” The brief invites students to critically speculate about emerging technologies and their socio-cultural implications in a plausible future Aotearoa, and to produce speculative artefacts, experiences, systems, games, interactive prototypes, diegetic prototypes, and provotypes that provoke reflection on future socio-technical change (Baldwin-Ramult, 2026). The brief identifies three thematic lenses: connect, collaborate, and create.

This project responds primarily through the lenses of collaborate and create, with a secondary thread of connect. It focuses on future student work and collaboration in an AI-saturated environment. As Artificial Intelligence (AI) becomes increasingly able to assist with writing, coding, designing, planning, summarising, and generating creative outputs, final artefacts may become less reliable as evidence of human effort. In such a future, people may no longer only ask “What was produced?” They may also ask: “Who actually worked? Who genuinely contributed? How do we know?”

This proposal explores a future productivity culture in which focus, effort, and contribution become visible, measurable, and contestable. In this future, emerging technologies do not only support work. They also produce behavioural evidence of how work was done, who contributed, and whether effort can be trusted. To investigate this condition, the research develops a speculative provotype, currently titled Tickers, where future students enter Proof Spaces, declare tasks, receive Focus Integrity scores, submit evidence of work, and allow peer or system-based contestation.

The project's central concern is not whether productivity enforcement is simply good or bad. Instead, it asks how a system can feel helpful, motivating, and fair while also normalising behavioural monitoring and social control. This makes the proposed artefact a provotype: a designed system intended to provoke discussion, discomfort, and reflection rather than to solve productivity as a narrow usability problem.

5.0 Research Context and Background

This project is situated in a plausible near-future Aotearoa where AI, automation, behavioural sensing, and digital productivity systems have become embedded in everyday study and work. In this future, students and young workers use AI tools to generate or assist with essays, code, design concepts, project plans, summaries, and presentations. These tools may increase speed and output, but they make the relationship between final artefact and human effort less visible.

This creates a potential trust problem in collaborative work. In group projects, remote work, and AI-assisted learning environments, the final submission may not clearly reveal who genuinely contributed, who relied heavily on AI, who stayed engaged, or who simply appeared productive. As a result, future systems may shift attention from output to process. They may not only assess what was produced, but also how work was performed, whether focus was maintained, and whether contribution can be evidenced.

This proposal speculates on one version of this future: a culture in which productivity support becomes productivity enforcement. Focus is no longer private — it becomes scored. Contribution is no longer trusted — it becomes evidenced. Collaboration is no longer based only on goodwill — it becomes structured through peer visibility, challenge, and accountability.

The project builds from the insight that emerging technologies do not become powerful only because they are technically advanced. They become powerful when they become useful enough to feel normal. Earlier reflective research conducted as part of this project identified that harmful trade-offs are most often accepted not through force, but through usefulness, care, efficiency, and convenience. This insight shapes the proposed design direction: the most interesting future is not a dramatic dystopia, but an ordinary system that people willingly accept because it helps them succeed.

This context positions the research within a near-future Aotearoa rather than a distant science-fiction setting. The project therefore treats emerging technology as a cultural force, not only a technical tool. The focus is on how AI-assisted work, behavioural sensing, and social accountability could reshape trust, autonomy, privacy, and collaboration once they become ordinary parts of study and creative work.

Situating the project in Aotearoa also means recognising that trust, surveillance, educational access, and accountability are not experienced evenly. A future verified-effort system may affect students differently depending on culture, disability, socioeconomic position, language background, working responsibilities, and access to reliable technology. This does not mean the proposal can resolve all of these differences at this stage, but it does mean the project must treat “verified effort” as a social and ethical question rather than only a technical one.

6.0 Problem Definition

In future student collaboration, AI may make it easier to produce polished outputs but harder to judge who actually focused, contributed, or made effort. This creates a trust problem. If the final artefact no longer proves the process behind it, future systems may begin to measure and verify behaviour instead.

Such systems may support people who struggle with distraction, motivation, accountability, or fairness in group work. They may help users stay aligned with declared tasks, provide evidence of contribution, and make peer collaboration more transparent. They may also normalise surveillance, peer judgement, and behavioural control. A system that begins as support may become difficult to refuse once it is linked to trust, reputation, rewards, or access to collaboration.

The proposed design response sits inside this tension. It takes the form of a speculative commitment system in which users enter Proof Spaces, declare tasks, place commitment stakes, receive Focus Integrity scores, submit proof, and allow peers to contest their claims. The project asks what happens when efficiency becomes so culturally valued that friendship, collaboration, money, reputation, and AI judgement become tools for enforcing productivity.

The deeper problem is therefore not that students are distracted. It is that future systems may respond to distraction, AI-assisted output, and uncertain contribution by making work processes increasingly visible and measurable. This raises a critical design question: how might emerging technologies designed to support accountability also reshape autonomy, trust, privacy, and social relationships? The problem is therefore not framed as a lack of productivity, but as a future trust condition created by AI-assisted output and process-based verification.

7.0 Significance and Purpose

This research is significant because the cultural conditions it speculates on are already partly emerging. AI-generated work is now common in education and creative practice, raising questions about authorship, academic integrity, and learning process (UNESCO, 2023). Digital self-control tools already normalise the routine collection of behavioural data through blockers, timers, self-tracking systems, and commitment mechanisms (Lyngs et al., 2019). Commitment-based platforms such as StickK and Beeminder show how stakes can be framed as legitimate forms of motivation, while online proctoring demonstrates how algorithmic judgement of behaviour can be deployed at institutional scale (Coghlan et al., 2021; Selwyn et al., 2023). This research does not invent these patterns. It brings them together through speculative design to ask what a society might look like if they merged into one normalised system of verified effort.

The purpose of the research is therefore threefold. First, to construct a plausible future scenario in Aotearoa that makes a verified-effort culture experienceable rather than abstract. Second, to develop a speculative provotype, currently titled Tickers, that performs this culture clearly enough for audiences to inhabit it, judge it, and disagree with it. Third, to use the design process as a critical research method that reveals tensions which would be difficult to surface through argument alone, particularly the tension between supportive interface aesthetics and controlling system logic.

The intended contribution to the field is twofold. To speculative and critical design practice, the research aims to contribute a worked example of how reward-oriented visual languages (such as those associated with CRED and NeoPOP) can be deliberately turned into instruments of provocation. To design ethics and emerging technology discourse in Aotearoa, the research aims to contribute a grounded case study that frames behavioural verification not as a distant possibility, but as a near-future cultural condition that designers in this region have an active role in shaping or refusing.

The audience for this proposal is therefore not only academic. It is also designers, educators, and students in Aotearoa who will increasingly need to make decisions about AI-mediated trust, accountability, and visibility in their own collaborative practice.

8.0 Research Aim, Objectives, and Questions

8.1 Research Aim

This research aims to investigate how emerging technologies could normalise a future efficiency culture in Aotearoa, where focus, effort, and contribution become visible, measurable, contestable, and socially accountable. Through the development of a speculative provotype, currently titled Tickers, the project explores how AI-mediated accountability systems may blur the boundary between productivity support and behavioural control.

The project explores how AI, behavioural sensing, peer contestability, and commitment-based accountability could reshape the way students in Aotearoa collaborate and understand effort. Through this speculative artefact, the research investigates the tension between supportive productivity systems and the possible normalisation of monitoring, pressure, and behavioural control.

8.2 Research Objectives

  • To investigate how AI may change trust in student effort, focus, and contribution.
  • To examine how productivity tools, behavioural monitoring systems, social accountability platforms, and commitment mechanisms already shape user behaviour.
  • To identify how useful support systems can become normalised forms of pressure, surveillance, or behavioural control.
  • To develop a speculative future scenario in Aotearoa in which focus and contribution are verified through AI, peer contestability, and commitment stakes.
  • To prototype a speculative future accountability system through interface screens, system maps, Proof Space flows, Focus Integrity feedback, and ethical risk mapping.
  • To critically evaluate the ethical implications of turning focus, effort, and contribution into behavioural evidence — where process becomes more important than output and trust moves from people to systems.
  • To identify how the proposed artefacts can provoke reflection and debate rather than present verified effort as a desirable solution.

8.3 Main Research Question

How might speculative design be used to provoke debate about a future Aotearoa where AI-mediated systems make student focus, effort, and contribution visible, measurable, contestable, and socially accountable?

8.4 Sub-Questions (How Might We / How Might Design)

Main HMW question:

How might a speculative future productivity system make the tension between support, surveillance, and accountability experienceable in student collaboration in Aotearoa?

Supporting HMW questions:

  • How might AI verification change trust between collaborators?
  • How might friendship and peer support become systems of accountability?
  • How might commitment stakes make motivation stronger but more ethically uncomfortable?
  • How might productivity support become behavioural control once it has become useful and ordinary?
  • How might a speculative interface make this future feel plausible rather than purely dystopian?

9.0 Research Scope

9.1 In Scope

  • Future student collaboration in Aotearoa
  • AI-assisted work and creative production
  • Productivity culture, behavioural monitoring, Focus Integrity scoring
  • Peer contestability, Proof Spaces, commitment stakes
  • Support versus control, speculative design, provotyping
  • Interface and system prototyping, ethical analysis
  • Speculative artefacts that materialise verified effort culture through interface, object, and institutional evidence

9.2 Out of Scope

  • Proving that the proposed system improves productivity long-term
  • Claiming AI can truly measure attention or motivation
  • Claiming financial stakes are safe or universally effective
  • Building a finished commercial product
  • Predicting that society will become like this
  • Primary research with participants at proposal stage
  • Full technical validation of AI attention detection or behavioural scoring accuracy

The future scenario is not a prediction; it is a plausible design context used to question emerging socio-technical patterns. For this DES300 proposal stage, the research is limited to secondary and non-participative methods, including literature review, precedent analysis, scenario building, system mapping, prototyping, and ethical analysis.

10.0 Literature Review

This literature review is organised thematically rather than chronologically. The selected sources frame the project's central tension between support and control in future productivity culture. The review brings together research on speculative design, AI-assisted authorship, digital self-control, behavioural nudging, gamification, social accountability, privacy, surveillance, and algorithmic judgement. Rather than treating these fields as separate, the review synthesises them to identify a design opportunity: a speculative system where productivity support, AI verification, peer accountability, and behavioural monitoring merge into a normalised culture of verified effort.

10.1 Speculative Design, Critical Design, and Provotypes

Speculative design provides a useful framework for this project because the proposed provotype is not intended to be judged only as a functional productivity product. Dunne and Raby (2013) argue that speculative design can create alternative futures that help people question present assumptions and debate possible social directions. Rather than solving a narrowly defined user need, speculative design can open a space for reflection on what should and should not be normalised.

This is important for the proposed provotype because the project does not ask, in any simple way, “How can productivity be improved?” It asks what kind of future culture is created when productivity becomes measurable, socially visible, and tied to consequences. In this sense, the artefact functions as a speculative work: it makes a possible future visible through interfaces, interaction flows, and system logic.

Design fiction and diegetic prototypes are also relevant. Bleecker (2009) describes design fiction as a way of materialising possible futures through artefacts that feel as if they belong to another world. Kirby (2010) uses the term “diegetic prototype” to describe fictional technologies that help audiences imagine future technological possibilities. The proposed artefact works in this tradition by creating a believable system from a plausible future Aotearoa in which AI-verified effort has become part of everyday student collaboration.

The concept of the provotype is also useful. Provotypes are designed to provoke reflection and expose hidden assumptions in existing practices (Boer & Donovan, 2012). The proposed artefact functions as a provotype because it deliberately sits between support and control. It should feel useful enough to be believable, but uncomfortable enough to raise questions. Its value lies in making a future efficiency culture experienceable, not in claiming that such a culture is desirable.

10.2 AI and the Changing Meaning of Effort, Authorship, and Contribution

Generative AI is changing how people produce written, visual, technical, and creative work. In education, AI tools raise questions about authorship, academic integrity, learning process, and how human contribution should be recognised (UNESCO, 2023). In work and creative practice, AI can assist with idea generation, drafting, coding, design exploration, and summarisation. These abilities make output faster and more polished but also make the human process behind the output harder to see.

This matters for collaboration. In a group project, the final artefact may not clearly show who generated ideas, who edited work, who used AI heavily, who only submitted final content, or who carried the process. When output becomes easier to produce, effort becomes harder to trust. The cultural focus may shift from assessing final artefacts to verifying process, contribution, and engagement.

This shift is central to the research problem. The project imagines a future in which people need to prove not only what they submitted, but how they worked. Focus Integrity, proof submission, and peer contestability become cultural responses to a world in which output alone is no longer enough. The project does not assume that AI can objectively measure effort. It treats AI verification as a fallible interpretation of behavioural evidence — important, because attention, effort, and creativity are not fully observable through digital signals.

10.3 Digital Self-Control, Productivity Tools, and Behavioural Support

Digital self-control tools already help users manage distraction, structure time, block websites, and reflect on device use. Lyngs et al. (2019) identify a broad design space of digital self-control tools, including blockers, timers, self-tracking systems, goal-setting tools, and commitment mechanisms. These systems generally respond to the gap between intention and behaviour: users know what they want to do but struggle to act consistently.

Pomodoro tools and focus apps structure time. App and website blockers restrict access. Digital wellbeing dashboards make usage visible. Task managers such as Todoist and TickTick help users organise commitments. However, many of these systems still rely on self-report or simple activity measures. They may know that a timer was completed or that an app was blocked, but not whether the user's work was meaningfully aligned with the declared task.

The proposed system speculates on a stronger shift: from productivity support to productivity enforcement. Instead of helping users plan or time their work, the system makes focus visible, scored, and consequential. This changes the cultural meaning of productivity tools. They no longer only support personal self-regulation; they become social systems that create evidence, accountability, and consequence.

Self-regulated learning literature also helps frame this issue. Zimmerman (2002) describes self-regulation as a cyclical process involving forethought, performance, and self-reflection. Digital productivity systems often support forethought and reflection, but the proposed provotype imagines a future in which the performance phase itself becomes monitored and scored. This raises the question of whether external enforcement strengthens self-regulation or weakens personal autonomy.

10.4 Behavioural Nudging, Commitment Devices, Gamification, and Stakes

The proposal also draws on behavioural economics and nudging. Thaler and Sunstein (2008) argue that choice architecture can shape behaviour without removing choice. Nudges may help people act in line with their own goals, but they raise questions about manipulation, autonomy, and who designs the choice environment.

Commitment devices are especially relevant. A commitment device helps a person bind their future behaviour by creating a cost for failure or a reward for follow-through. This logic appears in systems where users pledge money, make public commitments, or risk losing access or reputation if they fail. The proposed provotype translates this into commitment stakes: social, reputational, virtual, or financial consequences attached to declared intentions.
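
The commitment-device logic described above can be sketched in code. The following is a minimal, hypothetical illustration of how a commitment stake might bind a declared intention to a consequence; all names, stake types, and return strings are invented for this sketch and do not describe a real platform's API or the final design of Tickers.

```python
# Hypothetical sketch of a commitment stake attached to a declared
# intention. Stake types and forfeit logic are illustrative assumptions
# about the speculative system, not a real commitment platform's API.
from dataclasses import dataclass

@dataclass
class Commitment:
    task: str
    stake_type: str      # e.g. "financial", "social", "reputational", "virtual"
    stake_value: int
    completed: bool = False

    def resolve(self, proof_accepted: bool) -> str:
        """A commitment device binds future behaviour by attaching a cost
        to failure: accepted proof returns the stake; rejected or missing
        proof forfeits it."""
        self.completed = proof_accepted
        if proof_accepted:
            return f"stake returned ({self.stake_type}: {self.stake_value})"
        return f"stake forfeited ({self.stake_type}: {self.stake_value})"

c = Commitment(task="draft literature review",
               stake_type="reputational", stake_value=10)
print(c.resolve(proof_accepted=False))  # prints "stake forfeited (reputational: 10)"
```

Even this toy sketch surfaces the ethical tension the project examines: the forfeit branch executes regardless of why the proof failed, which is precisely how a stake-based system could punish users affected by stress, disability, or circumstance.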

Gamification supports the design logic. Deterding et al. (2011) define gamification as the use of game design elements in non-game contexts. In the proposed system, game-like elements — points, rooms, stakes, challenges, rewards, outcomes — make productivity feel immediate and engaging. However, gamification can also turn meaningful behaviour into pressure, performance, and competition.

This creates one of the project's core tensions. Commitment stakes may help users overcome procrastination and distraction, but they may also make self-improvement feel coercive. If a user cannot focus because of stress, fatigue, disability, caregiving responsibility, or emotional difficulty, a stake-based system may punish the very people it claims to help. For this reason, the proposed artefact must not present commitment as automatically beneficial. It must ask when support becomes behavioural pressure.

10.5 Social Accountability, Peer Visibility, and Collaboration

The DES300 brief asks students to consider how emerging technologies may reimagine connection, collaboration, and creation. Tickers responds by examining how social connection may itself become a tool of accountability. In ordinary collaboration, trust depends on communication, shared expectations, visible effort, and mutual responsibility. Digital and AI-assisted collaboration can make contribution harder to observe: people work asynchronously, use different tools, rely on AI in different ways, or contribute invisibly through planning and thinking.

Social accountability can motivate people because behaviour changes when it becomes visible to others. In the proposed system, Proof Spaces make this a designed social condition. Users work alongside others, submit proof, and allow peers to contest whether their effort counts. This creates a culture in which friendship and collaboration are not only forms of support; they become mechanisms of verification.

This is powerful but ethically uncomfortable. Peer accountability can help people feel less alone and more committed. It can also create shame, pressure, comparison, and social policing. When friends become validators of productivity, the meaning of friendship shifts. When collaboration becomes evidence-based judgement, the meaning of trust shifts. This is why the proposed artefact is strongest as a speculative provotype: it does not celebrate social accountability, it asks what happens when connection becomes infrastructure for productivity enforcement.

10.6 Behavioural Monitoring, Surveillance, Privacy, and Algorithmic Judgement

The most ethically sensitive part of the proposed system is behavioural monitoring. The Focus Integrity layer interprets signals such as active window, input patterns, screen activity, camera presence, browser behaviour, and AI classification. This relates to wider debates about workplace monitoring, online proctoring, algorithmic scoring, and digital surveillance, including the broader pattern of quantified-worker logics extending into education and everyday life (Ajunwa, 2023).

Nissenbaum's (2010) theory of contextual integrity is useful here because it argues that privacy is not only about secrecy, but about appropriate information flows in specific contexts. A signal that feels acceptable in a private self-tracking tool may become invasive when shared with peers, teachers, employers, or payment systems. In the proposed system, this boundary is deliberately unstable: the same behavioural evidence that supports personal focus also affects social trust and consequences.

Workplace monitoring research shows that measurement affects stress, trust, and autonomy. Electronic performance monitoring may create visibility and accountability, but it can also increase pressure and reduce worker control (Ravid et al., 2020). Online proctoring debates show similar risks, especially when systems make automated judgements about behaviour and suspicion (Coghlan et al., 2021; Selwyn et al., 2023).

Algorithmic judgement adds a further layer. AI systems interpreting behaviour may produce false positives, false negatives, or biased judgements. A student reading a printed article may look idle. A neurodivergent student may work in non-standard patterns. A user with poor lighting or camera issues may be misread. Focus Integrity must therefore be treated as a confidence-based and contestable interpretation, not as objective truth.

Contestability is therefore central. If the system can score people, users must be able to challenge, clarify, or contextualise the score. This keeps the project from treating AI judgement as final authority and supports its speculative aim: the research asks what happens when a society begins to treat behavioural evidence as proof of effort, even when that evidence is incomplete.
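
To make this contestability concrete, the logic could be sketched as follows. This is a deliberately simplified, hypothetical illustration: the signal names, weights, and threshold are invented assumptions, and no claim is made that such signals can actually measure attention.

```python
# Hypothetical sketch of a confidence-based, contestable Focus Integrity
# reading. Signal names, weights, and the 0.7 threshold are illustrative
# assumptions, not a working attention detector.
from dataclasses import dataclass, field

# Illustrative weights; real behavioural signals would be noisy,
# biased, and context-dependent.
SIGNAL_WEIGHTS = {
    "active_window_matches_task": 0.4,
    "input_activity": 0.3,
    "camera_presence": 0.3,
}

@dataclass
class FocusReading:
    signals: dict                       # signal name -> value in [0, 1]
    contests: list = field(default_factory=list)

    def raw_score(self) -> float:
        """Weighted interpretation of behavioural evidence."""
        return sum(SIGNAL_WEIGHTS[k] * self.signals.get(k, 0.0)
                   for k in SIGNAL_WEIGHTS)

    def contest(self, reason: str) -> None:
        """A user or peer challenges the interpretation with context,
        e.g. 'reading a printed article offline'."""
        self.contests.append(reason)

    def status(self) -> str:
        """The score is never final: any contested reading is routed to
        review rather than treated as objective truth."""
        if self.contests:
            return "under review"
        return "high confidence" if self.raw_score() >= 0.7 else "low confidence"

# A student reading a printed article looks "idle" to the sensors...
reading = FocusReading(signals={"active_window_matches_task": 0.0,
                                "input_activity": 0.1,
                                "camera_presence": 1.0})
print(reading.status())   # prints "low confidence"
# ...but contesting the reading suspends the algorithmic judgement.
reading.contest("reading a printed article offline")
print(reading.status())   # prints "under review"
```

The design point of the sketch is that the contest path always overrides the score: the system records an interpretation and its challenges, rather than issuing a final verdict.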

10.7 Synthesis

Across the literature, a clear pattern emerges. Speculative design provides methods for making possible futures visible and debatable. AI literature raises questions about authorship, process, and contribution. Digital self-control research shows that users already accept technological support for managing distraction. Behavioural economics and gamification show how commitment, reward, loss, and challenge can shape action. Social accountability research highlights how visibility can strengthen collaboration but also create pressure. Surveillance and privacy literature shows the ethical risk of treating behaviour as evidence.

The gap is not the absence of productivity tools, monitoring systems, or AI accountability debates. The gap is that these patterns are often discussed separately. This research brings them together into one speculative future condition: a culture of verified effort, where focus and contribution must be evidenced rather than simply trusted. The proposed provotype is therefore not designed to solve productivity, but to make this possible future visible, plausible, and debatable.

11.0 Precedent Review

The precedent review examines existing apps, systems, and design patterns that already normalise parts of the future Tickers speculates on.

Task management tools such as Todoist and TickTick show how work can be broken into clear, repeatable commitments. They reduce friction and make personal organisation easier. The proposed provotype draws on the clarity of task declaration, but uses it critically to ask when planning becomes behavioural evidence in a wider accountability system.

Social productivity tools such as Todomate show how productivity can become socially visible. Seeing others complete tasks can create motivation, emotional connection, and routine. The Proof Space concept extends this pattern speculatively, turning social visibility into contestability and accountability.

Focus tools such as Forest, Freedom, Cold Turkey, Apple Screen Time, and digital wellbeing tools show how users already accept restrictions to protect attention. The proposed provotype draws on the logic of blocking and focus support, but questions what happens when restriction becomes tied to scoring, peer judgement, and stakes.

Commitment platforms such as Beeminder and StickK show how financial or consequence-based motivation can support behaviour change. The proposed system broadens this into commitment stakes, which may be financial, social, reputational, or virtual. The project questions when motivation becomes pressure.

Workplace monitoring and online proctoring systems show how behaviour can be transformed into evidence. These systems help frame the ethical risks of Focus Integrity. Tickers does not simply borrow their logic. It uses their logic speculatively to ask whether future collaboration could normalise similar forms of verification.

Visual precedents such as CRED and NeoPOP influence the proposed system's tone. Their bold, polished, reward-oriented style makes ordinary actions feel serious, exciting, and legitimate. This is important because the proposed artefact should not look obviously dystopian. Its danger lies in how attractive and useful it could feel.

Precedent type | Examples | Existing pattern | Design implication | Critical question
Task managers | Todoist, TickTick | Work is broken into repeatable commitments | Task declaration can become a form of behavioural evidence | When does planning become evidence?
Social productivity | Todomate | Productivity is made socially visible | Peer visibility can be repurposed as accountability | When does friendship become pressure?
Focus blockers | Forest, Freedom, Cold Turkey | Users accept restriction to protect attention | Focus support can become focus enforcement | When does support become enforcement?
Commitment tools | Beeminder, StickK | Stakes are tied to behavioural follow-through | Commitment can become coercion | When does motivation become coercion?
Monitoring systems | Online proctoring, workplace tracking | Behaviour is treated as evidence | Trust can be relocated from people to verification systems | When does trust require surveillance?
Reward aesthetics | CRED, NeoPOP | Polished visual systems make control feel desirable | Aesthetic polish can normalise behavioural enforcement | When does polish make control feel normal?
Table 1. Precedent map showing how existing design patterns inform the proposed provotype and what critical questions each pattern raises (author-created).

The precedent review shows that the proposed future does not come from one single technology. It emerges from the combination of already familiar design patterns: task declaration, social visibility, restriction, stakes, behavioural evidence, and reward aesthetics. The design opportunity is therefore to combine these patterns into a speculative system that feels plausible because each part already has a recognisable precedent.

12.0 Research Gap and Design Opportunity

The research gap is not that no productivity app exists. The gap is that existing research and products separate productivity support, AI-assisted work, social accountability, commitment devices, and surveillance. This research combines these patterns into one speculative system, in order to ask what kind of culture may emerge when effort itself becomes visible and contestable.

The design opportunity is to make this future culture experienceable. The proposed provotype should feel useful enough to be plausible, but uncomfortable enough to provoke debate. It should not only present a new interface; it should present a new social condition — a future in which support, accountability, and control become difficult to separate.

This aligns with the DES304 brief because the project uses emerging technologies and speculative design to reimagine how people connect, collaborate, and create in Aotearoa, and responds directly to the brief's emphasis on creating speculative artefacts, systems, or experiences that provoke reflection on socio-cultural change. This gap creates a clear design opportunity for a provotype that does not predict the future, but makes one possible future condition available for critique.

13.0 Methodology, Methods, and Tools

13.1 Methodological Framework

This project uses speculative design, critical design, provotyping, and practice-led design research as its methodological framework.

Speculative design is appropriate because the project examines a possible future culture rather than only solving a present-day usability problem. Critical design is appropriate because the research questions the values embedded in productivity systems, AI-mediated accountability, and behavioural verification. Provotyping is appropriate because the artefact is intended to provoke reflection and debate. Practice-led design research is appropriate because the act of designing the interface, system map, scenario, and interaction flow generates knowledge about the research question.

  • Speculative design: examines a possible future culture rather than only solving a present-day usability problem.
  • Critical design: questions the values embedded in productivity systems instead of simply improving them.
  • Provotyping: the prototype is intended to provoke reflection and debate, not to validate a solution.
  • Practice-led research: designing the interface, system map, and interaction flow itself generates knowledge about the question.

Figure 2. Methodological framework. The project sits within Frayling's (1993) research-through-design — the artefact is itself an argument (author-created, based on Frayling, 1993; Dunne & Raby, 2013; Boer & Donovan, 2012).

Within Frayling's (1993) categories, this project sits primarily within research-through-design: knowledge is produced by making, and the artefact is itself an argument. The paradigm is critical-interpretive rather than positivist; the goal is not to measure productivity but to surface contested values.

13.2 Methods

The project uses secondary and non-participative methods only. These include:

  • Literature review
  • Precedent analysis
  • Technology review
  • Speculative scenario building
  • Design fiction
  • System mapping
  • Journey mapping
  • Interface prototyping
  • Visual experimentation
  • Ethical analysis
  • Reflective practice

Method | What it examines | Output for proposal
Literature review | Speculative design, AI authorship, behavioural monitoring, accountability | Theoretical framework and research gap
Precedent analysis | Productivity apps, focus tools, monitoring systems, commitment platforms | Design principles and ethical tensions
Technology review | AI verification, behavioural sensing, screen activity, cross-device systems | Plausibility of the future scenario
Scenario building | Student collaboration in near-future Aotearoa | 2031 future context and user situation
System mapping | Relationships between users, peers, AI, stakes, and evidence | Logic of the speculative system
Interface prototyping | Proof Space, Focus Integrity, contestability flow | Provotype for debate
Ethical risk mapping | Privacy, autonomy, peer pressure, algorithmic judgement | Critical reflection framework
Table 2. How each method generates output for the proposal — making explicit what data each activity produces and how it contributes to the research argument (author-created).

Research design flow: Literature Review (theoretical framework) → Precedent Review (existing design patterns) → Research Gap (combined future condition) → Future Scenario (plausible 2031 Aotearoa) → Three Concept Directions (interface · object · institutional) → Selected Provotype (Proof Space Interface) → Ethical Risk Mapping (critical reflection) → Speculative Debate / Reflection (research outcome).
Figure 8. Research design flow showing how secondary research, precedent analysis, and speculative scenario building lead into the selected provotype and its ethical evaluation (author-created).

These methods are suitable because the project is not collecting participant data at this proposal stage. The aim is to build a well-grounded design inquiry that can be extended in DES304 with primary research and audience-facing testing.

13.2.1 Technology Review Focus

The technology review will examine the plausibility of the proposed system rather than attempt to validate its accuracy. It will consider AI-assisted authorship, behavioural sensing, screen and application activity, camera-based presence detection, cross-device synchronisation, and confidence-based scoring. These technologies are not treated as neutral solutions. Instead, they are reviewed as speculative materials that make a future verified-effort culture feel technically plausible while also exposing risks around misclassification, privacy, and algorithmic authority.
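To make the idea of confidence-based scoring concrete, the sketch below models Focus Integrity as a weighted confidence over a few behavioural signals. Every name, signal, weight, and threshold here is a hypothetical illustration invented for this proposal, not a specification; the point of the sketch is precisely the risk named above, that offline or atypical working styles are invisible to the signals and are therefore scored down.

```python
from dataclasses import dataclass

# Hypothetical sketch of confidence-based Focus Integrity scoring.
# All signal names and weights are illustrative assumptions.

@dataclass
class BehaviourSignal:
    window_matches_task: bool   # active window looks task-related
    input_active: bool          # recent keyboard / mouse activity
    presence_detected: bool     # camera-based presence

def focus_confidence(signal: BehaviourSignal) -> float:
    """Return a 0-1 confidence that the user is on-task.

    Note what this cannot see: offline reading, thinking, planning,
    caregiving, or disability-related working styles all register as
    'inactivity'. This is why the proposal insists the score must
    remain contestable rather than be framed as objective truth.
    """
    score = 0.0
    if signal.window_matches_task:
        score += 0.5
    if signal.input_active:
        score += 0.3
    if signal.presence_detected:
        score += 0.2
    return score

# A user reading a printed article: present, but no screen or input
# activity, so the confidence is low despite genuine effort.
offline_reading = BehaviourSignal(False, False, True)
print(focus_confidence(offline_reading))
```

The misclassification is structural, not a bug: the classifier is behaving exactly as designed, which is why contestability has to sit outside the scoring logic.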

13.3 Tools and Materials

The project may use:

  • Figma interface prototypes and visual system experiments
  • Technical architecture diagrams showing how the speculative system could plausibly operate
  • Optional Flutter / Django prototype evidence as proof of technical feasibility
  • User journey and Proof Space flow maps
  • Future scenario writing
  • Focus Integrity HUD state mockups
  • Mobile session summary screens
  • Ethical risk maps
  • Visual precedent boards

The prototype should include both interface and system logic: it should show not only what users tap, but how the future culture operates.

13.4 Indicative Schedule

The DES300 phase covers proposal-stage work; DES304 carries the project through to a public-facing artefact and impact reflection. The indicative arc is:

  • Weeks 1–3: brief scoping, reverse brief, exploratory research, themes.
  • Weeks 4–5: three concept directions, low-fidelity prototype, fidelity shift, cross-device test.
  • Weeks 6–9: literature and precedent depth; methodology consolidation; research design.
  • Weeks 10–12: proposal write-up; visual proof of concept (system map, Proof Space flow, Focus Integrity HUD, ethical risk map).
  • DES304 S2: refinement of the chosen concept, scenario depth, audience-facing speculative artefact, impact articulation.

14.0 Design Process and Proof of Concept

14.1 Brief Scoping and Reframing

The design process began with brief scoping. Rather than asking which existing product idea could fit the brief, the project shifted toward asking what kind of future condition should be made visible. This reframing was important because the DES304 brief is not asking for a polished product alone. It asks for speculative design that can provoke reflection on future socio-technical change in Aotearoa.

The key reframing was that “future” does not need to mean visually futuristic technology. A plausible future can be a technology or system that already partly exists, but has not yet become ordinary, socially normal, or culturally embedded in Aotearoa. This allowed Tickers to be framed not as a new gadget, but as a possible future culture.

14.2 Early Research Themes

Early research and reflection identified several themes that shaped the Tickers direction:

  • Surveillance framed as support
  • Behaviour shaping through optimisation
  • Helpful systems becoming hard to refuse
  • Support becoming control
  • Connection becoming accountability
  • Technology becoming normal by disappearing into everyday life

The most important insight was that harmful systems often normalise through usefulness rather than coercion. This directly informed the design decision to make Tickers feel polished, useful, and believable rather than obviously dystopian.

14.3 Three Concept Directions

From the research and early prototyping, three possible design responses were developed. Each concept explores the same future condition from a different design angle: interface, physical object, and institutional artefact.

CONCEPT 1 (CHOSEN): Proof Space Interface. A speculative app-based commitment system where future students enter shared Proof Spaces, declare tasks, place commitment stakes, receive Focus Integrity scores, submit proof, and allow peers to contest effort. The social and behavioural layer of verified effort.

CONCEPT 2: Ticker Cube. A physical desk object that makes focus status visible through light, expression, and ambient feedback. Digital accountability becomes embodied in the physical study environment. Verification as visible presence on the desk.

CONCEPT 3: Verified Effort Receipt. A future submission artefact attached to student group work. It displays declared contribution, AI-use disclosure, Focus Integrity summaries, peer contestability history, and proof traces. Verified effort institutionalised in education.
Figure 3. Three concept directions explored as different design responses to the same research problem — interface, physical object, and institutional artefact. Proof Space Interface was selected for its clearer expression of the support-versus-control tension (author-created).

Among these, the Proof Space Interface is currently the strongest direction because it contains the clearest interaction loop: declaration, monitoring, proof, contestability, and consequence. It also most directly exposes the project's central tension between support and control.

14.4 Selected Direction — Proof Space Interface

At this stage, the selected direction is the Proof Space Interface, a speculative provotype within the wider verified-effort system currently titled Tickers. The core experience involves a user entering a Proof Space, declaring a task, placing a commitment stake, working under Focus Integrity monitoring, submitting proof, and receiving an outcome based on AI verification and peer contestability. This system allows the project to explore how connection, collaboration, and creation may be transformed when trust depends on behavioural evidence, and to question how desirable interface design may make monitoring feel acceptable.

14.5 Future Scenario — A Day in 2031

Aotearoa, 2031. Mei is a third-year design student in Tāmaki Makaurau. Her group project is due in nine days. She opens Tickers on her phone and books a Proof Space with her two collaborators for 10 a.m. on Tuesday — ninety minutes, declared task: “draft user research synthesis for the second iteration.” She attaches a $15 commitment stake and a friendship stake — if she misses the session, her friend Aria gets a notification and a small payout from her account. The interface is calm, mostly black, with a single warm accent. It feels like getting ready for a workout.

When the session begins, the desktop HUD shows her Focus Integrity at 96. She writes for thirty-eight minutes. At minute thirty-nine, she opens a printed reading. The HUD score drops to 71 within four minutes; a small countdown appears. She taps “contextualise”, logs the offline reading, and the score climbs back. She finishes the session at 88. Her draft is submitted with an attached behavioural trace. Aria, who was in her own Proof Space, can see Mei's outcome but not her screen. Two days later, a peer in another group challenges the draft, claiming AI-assisted output. The challenge is resolved in twenty minutes through a contestability hearing built into the app. The friendship stake is released. Mei keeps the $15. She has a verified record.

What is interesting about this scenario is not the technology. The technology is plausible today. What is interesting is what has shifted in the culture. Mei trusts the score. Aria trusts the score. The peer who challenged Mei trusts that the challenge will be heard. Trust no longer flows between people; it flows through the platform. That is the future this provotype aims to make visible: not a dramatic dystopia, but a quietly redrawn relationship between effort, friendship, and proof.

14.6 Visual Proof of Concept

The proof of concept materialises this future through four interlinked artefacts. Each one demonstrates a different layer of the speculative system, and together they make the future culture experienceable rather than merely described.

14.6.1 System Map

The system map shows Tickers as a cross-device speculative architecture. The user layer (phone, desktop, peers, stakes) feeds into a verification layer (backend, AI judgement) which produces, over time, a cultural layer (verified-effort norms, friendship as oversight, polish as legitimacy). The cultural layer is the real research site: Tickers' design output is technical, but its argument is cultural.

User layer (phone · desktop · peers · stakes) → Verification layer (backend · AI judgement) → Cultural layer (verified-effort norms · friendship as oversight · polish as legitimacy).
Figure 4. System map for the proposed verified-effort provotype. The cultural layer is rendered in dashed grey because it cannot be coded — it is what each session quietly contributes to once enough people use the system (author-created).

14.6.2 Proof Space User Flow

The user flow traces a single Tickers session, from declared intention to social settlement. Steps 04 (live Focus Integrity), 06 (AI judgement), 07 (peer contestability), and 08 (stake settlement) are where the speculative tension concentrates. Step 10 — cultural residue — is where each session quietly accumulates into a future identity of verified effort.

01. Declare task: user sets intention
02. Set commitment stake: money / friend / reputation
03. Enter Proof Space: session begins
04. Live Focus Integrity: behavioural scoring [tension]
05. Submit proof: evidence of effort
06. AI judgement: confidence-based verdict [tension]
07. Peer contestability: challenge or affirm [tension]
08. Stake settlement: consequence applied [tension]
09. Session summary: receipt of effort
10. Cultural residue: what the session leaves behind
Figure 5. Proof Space user flow. The flow shows how declaration, monitoring, AI judgement, peer contestability, and stakes become one social system. Steps 04, 06, 07, and 08 are where the speculative tension concentrates; step 10 — cultural residue — is where each session quietly accumulates into a future identity of verified effort (author-created).
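The ten-step flow above is strictly linear, which can be sketched as a simple ordered state machine. The sketch below is illustrative only (the enum names and helper are invented here to mirror Figure 5, not part of the proposed system), but it makes one structural point visible: the four tension steps are not optional branches, they sit on the only path through a session.

```python
from enum import Enum, auto
from typing import Optional

# Illustrative sketch of the Proof Space session as a linear state
# machine. Step names follow Figure 5; the code itself is hypothetical.

class Step(Enum):
    DECLARE_TASK = auto()
    SET_STAKE = auto()
    ENTER_PROOF_SPACE = auto()
    LIVE_FOCUS_INTEGRITY = auto()
    SUBMIT_PROOF = auto()
    AI_JUDGEMENT = auto()
    PEER_CONTESTABILITY = auto()
    STAKE_SETTLEMENT = auto()
    SESSION_SUMMARY = auto()
    CULTURAL_RESIDUE = auto()

# The steps where the proposal locates the speculative tension.
TENSION_STEPS = {
    Step.LIVE_FOCUS_INTEGRITY,
    Step.AI_JUDGEMENT,
    Step.PEER_CONTESTABILITY,
    Step.STAKE_SETTLEMENT,
}

def next_step(current: Step) -> Optional[Step]:
    """Advance the session; returns None after the final step."""
    ordered = list(Step)
    i = ordered.index(current)
    return ordered[i + 1] if i + 1 < len(ordered) else None

# Walk a full session and flag where the tension concentrates.
step: Optional[Step] = Step.DECLARE_TASK
while step is not None:
    marker = "  [TENSION]" if step in TENSION_STEPS else ""
    print(f"{step.value:02d} {step.name}{marker}")
    step = next_step(step)
```

Because there is no branch that skips monitoring, judgement, contestability, or settlement, a user cannot opt out of the tension steps without opting out of the session itself; that is the design argument the flow carries.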

14.6.3 Focus Integrity HUD States

The Focus Integrity HUD has three primary states: aligned, warning, and contested. They share a single visual language so that the system feels consistent and legitimate in all three. The aligned state is the seductive state — calm, rewarding, motivating. The warning state reframes private moments (rest, offline reading, caregiving) as behavioural risk. The contested state turns peers into validators. The point of the design is that all three feel like one ordinary system.

  • ALIGNED (96 / 100): Focus Integrity on declared task. Calm. Rewarding. Motivating.
  • WARNING (71 / 100): drift detected, re-align in 04:00. Reframes rest, offline reading, or caregiving as behavioural risk.
  • CONTESTED (no score shown / 100): peer challenge, hearing in 20m. Peers become validators. Friendship turns into oversight.
Figure 6. Focus Integrity HUD across three speculative states, showing how the same visual language can frame support, warning, and contestability as ordinary system behaviour. The reward aesthetic is borrowed from CRED / NeoPOP and is doing critical work here, not stylistic work (author-created).
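The three states could be reduced to a very small piece of decision logic. The sketch below is an assumption (the function name and the 80-point drift threshold are invented here; the scenario only shows 96 as aligned and 71 as warning), and its purpose is to make one property of the design legible: once a peer challenge exists, the behavioural score stops mattering at all.

```python
# Hypothetical sketch of how the HUD might derive its state from a
# Focus Integrity score and a pending peer challenge. The threshold
# is an illustrative assumption, not part of the proposal.

def hud_state(score: int, peer_challenge_pending: bool) -> str:
    # A contested session overrides the score entirely: peers, not
    # behaviour, now determine the state. This is the critical point.
    if peer_challenge_pending:
        return "CONTESTED"
    # Assumed drift threshold, below which the HUD starts a countdown.
    return "ALIGNED" if score >= 80 else "WARNING"

print(hud_state(96, False))  # ALIGNED
print(hud_state(71, False))  # WARNING
print(hud_state(96, True))   # CONTESTED, regardless of the score
```

That override is why the contested state shows no number: authority has moved from the algorithm to the peer hearing, while the shared visual language keeps both feeling like one ordinary system.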

14.6.4 Ethical Risk Map

The ethical risk map plots Tickers' features along two axes: supportive ↔ controlling, and private ↔ socially visible. Features in the social-enforcement quadrant (peer contestability, AI verdicts shared with peers, stake forfeit broadcasts) are the most ethically dangerous because they combine behavioural judgement with social consequence. They are also the features that make the system feel important and legitimate. The design tension is structural, not accidental.

Axes: supportive ↔ controlling (vertical) and private ↔ socially visible (horizontal).

  • Private support: task declaration · focus timer · private Focus Integrity score
  • Private pressure: financial stake · AI behavioural scoring · camera / screen evidence
  • Social support: shared session presence · co-presence · peer encouragement
  • Social enforcement (most dangerous): peer contestability · AI verdict shared with peers · stake forfeit broadcast
Figure 7. Ethical risk map showing where each feature sits between support / control and private / socially visible use. The bottom-right quadrant is the most ethically dangerous because it combines behavioural judgement with social consequence (author-created).
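The risk map is, in effect, a classification over two boolean dimensions, which can be encoded directly as data. The sketch below is an illustrative reconstruction of Figure 7 (the dictionary, the boolean encoding, and the function name are all invented here); it shows that the "most dangerous" quadrant is simply the conjunction of the two risk axes.

```python
# Illustrative encoding of the ethical risk map: each feature maps to
# a (controlling, socially_visible) pair. Placements follow Figure 7;
# the encoding itself is an assumption made for this sketch.

FEATURES = {
    "task declaration": (False, False),
    "focus timer": (False, False),
    "private Focus Integrity score": (False, False),
    "financial stake": (True, False),
    "AI behavioural scoring": (True, False),
    "camera / screen evidence": (True, False),
    "shared session presence": (False, True),
    "co-presence": (False, True),
    "peer encouragement": (False, True),
    "peer contestability": (True, True),
    "AI verdict shared with peers": (True, True),
    "stake forfeit broadcast": (True, True),
}

def quadrant(controlling: bool, socially_visible: bool) -> str:
    if controlling and socially_visible:
        return "SOCIAL ENFORCEMENT (most dangerous)"
    if controlling:
        return "PRIVATE PRESSURE"
    if socially_visible:
        return "SOCIAL SUPPORT"
    return "PRIVATE SUPPORT"

# The features that combine behavioural judgement with social consequence.
most_dangerous = [f for f, (c, v) in FEATURES.items() if c and v]
print(most_dangerous)
```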

14.7 Proof of Concept — Working List

In addition to the four diagrams above, the DES304 phase will develop:

  • A short scenario film or storyboard set in 2031 Aotearoa
  • A Proof Space onboarding flow (Figma)
  • A task declaration screen and commitment stake setup screen
  • A live Focus Integrity overlay, shown through interface mockups and optional coded evidence of technical feasibility
  • A warning / recovery state with contextualise option
  • A session summary screen
  • A proof submission and peer contestability screen
  • A visual language pass that makes the system feel polished, high-stakes, and believable enough to expose how control may become desirable

The purpose is not to prove the system works as a finished product. It is to test whether the artefact communicates the future tension between support and control.

14.8 DES300 Proposal Deliverables

At the DES300 proposal stage, the project will submit the research proposal, literature and precedent review, research gap, methodology, three concept directions, future scenario, system map, Proof Space flow, Focus Integrity HUD states, and ethical risk map. These are not final product deliverables. They are proposal-stage artefacts used to show that the Capstone direction is theoretically grounded, feasible, ethically aware, and aligned with the DES304 Emerging Technologies brief.

15.0 Ethical Considerations

The main ethical issue is that the proposed provotype deliberately blurs support and control. The same features that may help users focus — AI verification, peer contestability, commitment stakes — may also create pressure, anxiety, surveillance, and social judgement.

Because the artefact is deliberately designed to feel useful and persuasive, there is also an ethical risk in the design communication itself. The project must avoid presenting behavioural verification as an unquestioned solution. It should make the system's seduction visible as part of the critique.

Privacy is a major concern. Focus Integrity may rely on behavioural evidence such as active window data, screen context, camera presence, input activity, or browser behaviour. Even if raw data is not stored, derived scores may still affect trust and consequences. The project must therefore consider data minimisation, consent, transparency, and appropriate information flows.

AI misjudgement is another concern. The system may incorrectly classify behaviour as distracted or aligned. It may misunderstand reading, thinking, planning, offline work, disability-related behaviour, or different working styles. For this reason, Focus Integrity must remain contestable and should never be framed as objective truth.

Peer contestability also introduces risk. It may make collaboration more transparent, but it may also create shame, social pressure, and conflict. A peer challenge system can easily become social policing. Commitment stakes may increase motivation, but they may also create stress or harm, especially if linked to financial or reputational consequences.

The project must also consider the risk of normalisation. If the proposed artefact looks too useful, polished, and fair, users may accept its deeper control logic without questioning it. This is not only a design risk; it is the core issue the project aims to expose. As a speculative artefact rather than a deployed product, the provotype handles this by surfacing its own seduction explicitly in the ethical risk map and in the framing of every artefact.

In the DES300 stage, this risk is managed by using secondary research, non-participative methods, speculative artefacts, and self-reflective analysis rather than collecting sensitive participant data.

Because this proposal stage uses only secondary, non-participative methods, no human-subjects ethics application is required at DES300. If primary research with audiences is conducted in DES304, an appropriate ethics review will be sought through the University of Auckland's Human Participants Ethics Committee, with attention to informed consent, withdrawal rights, data handling, and the emotional weight of being asked to evaluate a deliberately uncomfortable artefact.

16.0 Limitations

This research does not claim that the proposed system improves productivity. It does not claim that AI can accurately measure attention, motivation, effort, or contribution. Focus Integrity is treated as a speculative and contestable interpretation of behavioural evidence, not as objective truth.

The project is limited to secondary research, precedent analysis, speculative scenario building, interface prototyping, and ethical analysis. It does not include interviews or questionnaires at this proposal stage. The future scenario is plausible, but not predictive; it is used as a design context for critical reflection.

The project focuses mainly on student collaboration in Aotearoa. It does not attempt to represent all workplaces, all cultures, or all forms of creative practice. The proof of concept may demonstrate the system's logic, but it will not prove long-term behavioural effects, technical reliability, or social acceptability.

The proposal also has a cultural limitation. Although it is situated in Aotearoa, the current draft does not yet fully explore how different communities may experience accountability, surveillance, privacy, and educational trust differently. This will need to be handled carefully in later stages of the project.

The technical prototype should be understood as plausibility evidence rather than validation. A working HUD or backend flow may show that the system could exist, but it does not prove that the system is accurate, ethical, or socially acceptable.

Finally, the researcher's own positionality — a designer with strong sympathies for systems-thinking, productivity, and behavioural design — is itself a limitation. The proposal addresses this through the position statement and through the deliberate use of provotyping, which forces the design to argue against its own seductiveness.

17.0 Assumptions

This proposal is based on several assumptions:

  • AI-assisted work will continue to become more common in student and creative contexts.
  • Final outputs will become less reliable as evidence of human effort and contribution.
  • People may accept behavioural monitoring if it is framed as useful, fair, supportive, or motivating.
  • Peer accountability can support collaboration, but it can also create pressure and judgement.
  • Commitment stakes can strengthen motivation, but they may also create ethical risks.
  • A familiar app-like interface can make a speculative future feel more plausible than a clearly dystopian artefact.
  • The future culture explored in this project is not inevitable. It is one possible direction that speculative design can help question.
  • A speculative interface can provoke debate even without being technically complete, as long as the future condition it represents is coherent, plausible, and ethically legible.

18.0 Expected Impact

The intended impact of this project is to make a plausible future efficiency culture visible and debatable. By allowing audiences to inhabit a system in which effort is declared, scored, contested, and tied to stakes, Tickers aims to provoke questions about what kinds of support systems people may accept in the future.

The project does not seek to persuade audiences that productivity enforcement is good. It asks whether the desire for focus, fairness, and accountability could lead people to accept systems that reduce autonomy, privacy, and trust. It also asks how future collaboration might change if focus and contribution become things that must be proven rather than quietly trusted.

In this way, the proposed research responds to the DES304 brief by using emerging technologies and speculative design to reimagine how people in Aotearoa may connect, collaborate, and create. It aims to open a critical conversation about the future of verified effort, AI-mediated accountability, and the cultural cost of efficiency — a conversation that needs to happen now, while the underlying patterns are still negotiable.

References

Ajunwa, I. (2023). The quantified worker: Law and technology in the modern workplace. Cambridge University Press.

Baldwin-Ramult, L. (2026). DES304: Emerging Technologies stream brief [Course brief]. University of Auckland.

Bleecker, J. (2009). Design fiction: A short essay on design, science, fact and fiction. Near Future Laboratory.

Boer, L., & Donovan, J. (2012). Provotypes for participatory innovation. In Proceedings of the Designing Interactive Systems Conference (pp. 388–397). Association for Computing Machinery. https://doi.org/10.1145/2317956.2318014

Coghlan, S., Miller, T., & Paterson, J. (2021). Good proctor or “Big Brother”? Ethics of online exam supervision technologies. Philosophy & Technology, 34(4), 1581–1606. https://doi.org/10.1007/s13347-021-00476-1

Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design elements to gamefulness: Defining “gamification”. In Proceedings of the 15th International Academic MindTrek Conference (pp. 9–15). Association for Computing Machinery. https://doi.org/10.1145/2181037.2181040

Dunne, A., & Raby, F. (2013). Speculative everything: Design, fiction, and social dreaming. MIT Press.

Frayling, C. (1993). Research in art and design. Royal College of Art Research Papers, 1(1), 1–5.

Kirby, D. (2010). The future is now: Diegetic prototypes and the role of popular films in generating real-world technological development. Social Studies of Science, 40(1), 41–70. https://doi.org/10.1177/0306312709338325

Lyngs, U., Lukoff, K., Slovak, P., Binns, R., Slack, A., Inzlicht, M., Van Kleek, M., & Shadbolt, N. (2019). Self-control in cyberspace: Applying dual systems theory to a review of digital self-control tools. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Paper No. 131). Association for Computing Machinery. https://doi.org/10.1145/3290605.3300361

Nissenbaum, H. (2010). Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press.

Ravid, D. M., Tomczak, D. L., White, J. C., & Behrend, T. S. (2020). EPM 20/20: A review, framework, and research agenda for electronic performance monitoring. Journal of Management, 46(1), 100–126. https://doi.org/10.1177/0149206319869435

Selwyn, N., O'Neill, C., Smith, G., Andrejevic, M., & Gu, X. (2023). A necessary evil? The rise of online exam proctoring in Australian universities. Media International Australia, 186(1), 149–164. https://doi.org/10.1177/1329878X211005862

Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.

UNESCO. (2023). Guidance for generative AI in education and research. United Nations Educational, Scientific and Cultural Organization.

Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory Into Practice, 41(2), 64–70. https://doi.org/10.1207/s15430421tip4102_2

Note on precedent sources: Commercial products and visual precedents discussed in the precedent review, including Todoist, TickTick, Forest, Freedom, Cold Turkey, Beeminder, StickK, CRED, and NeoPOP, were reviewed as publicly available design precedents. Their official product websites and / or app store materials should be included in the final reference list where directly discussed.

Note on figures: All figures were composed by the author specifically for this proposal as visual proof of concept for the proposed verified-effort system. The reward-aesthetic precedents referenced (CRED / NeoPOP) are cited in the precedent review and not reproduced verbatim.

AI Use Acknowledgement

Generative AI tools were used to support brainstorming, structure refinement, and editing. All research framing, source selection, design decisions, and final writing were reviewed and revised by the author.