2018 | Steve Whittaker

Steve Whittaker is Professor of Human-Computer Interaction at the University of California, Santa Cruz. He is a member of the CHI Academy and Editor of the journal Human-Computer Interaction. In 2014, he received a Lifetime Research Achievement Award from SIGCHI (Special Interest Group on Computer-Human Interaction), and he is a Fellow of the Association for Computing Machinery. Probably best known for his work on email overload, computer-mediated communication, and personal information management, he combines empirical analyses informed by the social sciences with novel interactive system design to address important human problems. He has worked in both industry and academia, at HP Labs, IBM, AT&T Bell Labs, and the University of Sheffield. His h-index is 65, and he holds over 50 US and worldwide patents. He is currently working on personal informatics, and his latest book from MIT Press is The Science of Managing Our Digital Stuff.

His program:

  • Tuesday, July 10, 2018 – 2:00pm, LIMSI, conference room – Rue John von Neumann, Orsay
    • Title: The Quantified Self and Personal Informatics: Critique and Opportunities
    • Abstract: The Quantified Self (QS) vision argues for collecting and analysing rich collections of personal data. According to the vision, such personal data facilitate greater self-insight, promoting behaviour change. But although self-quantification is important, quantified-self technologies show poor rates of adoption. This talk explores reasons for this failure. I will revisit the QS vision and identify its flaws, arguing that it is overly analytic, overly rational, and overly authoritative. I will present deployments of three new QS systems that address these problems, proposing a new design approach to personal data systems.
  • Wednesday, July 11, 2018 – 2:00pm, LIMSI, conference room – Rue John von Neumann, Orsay
    • Title: Designing Computational Systems to Support Emotion Regulation
    • Abstract: Social science and health research indicates that people lack insight into their emotions. They therefore find it difficult to regulate their mood, which has significant impacts on mental and physical health. This talk describes two different classes of computational system designed to help regulate mood. The first, Emotical, employs statistical modeling to help users diagnose their own moods and combines this with a predictive personalized model that recommends novel behavioral strategies to improve mood. I will describe a successful deployment of this system in a one-month user trial. The second system, MoodAdaptor, provides adaptive, context-specific reflections: users who are in a negative mood are supplied with positive personal memories to reflect upon. Again, I will describe data from a successful real-world deployment. I will also talk about current work that uses NLP and machine learning approaches to model mood, allowing for potentially more lightweight intervention methods. Finally, I will summarize future research and practical challenges.
  • Monday, July 16, 2018 – 2:30pm, ENSTA ParisTech, 1024, Boulevard des Maréchaux, 91762 Palaiseau Cedex
    • Title: When Machines Know More About Us Than We Do Ourselves: What Causes Us To Act on Algorithmic Interpretations of Highly Personal Data? – Steve Whittaker, Aaron Springer & Victoria Hollis, University of California, Santa Cruz
    • Abstract: Intelligent systems powered by machine learning are pervasive in our everyday lives. These systems make decisions ranging from the mundane to the significant, from routes to work to recommendations about criminal recidivism. We increasingly devolve responsibility to these systems with little transparency or oversight. Concerns about how these systems make decisions are building, and they are only exacerbated by recent machine learning methods, such as deep learning, that are difficult to explain in human-comprehensible terms. The current paper uses empirical mixed methods to explore key processes underlying human comprehension of, and trust in, algorithms. Specifically, we evaluate when and why people are prepared to accept and act upon algorithmic outputs in the context of highly personal ‘quantified self’ data. Across four studies, we document instances of algorithmic authority where users are overly accepting of algorithmic interpretations of their own emotions, overruling their own views of how they feel. On other occasions, we find the opposite effect, where users reject quite accurate system interpretations when these disconfirm their views of themselves. We also present an intervention that aims to address these problems by making algorithms more transparent. We show that increased transparency may have paradoxical effects, and we conclude with a discussion of design implications in this space.

Publication:

J.-C. Martin, Ch. Lescanff, S. Rosset, M. Walker, S. Whittaker, “How to Personalize Conversational Coaches for Stress Management?”, WellComp’18 Workshop, in conjunction with UbiComp 2018, October 8, 2018, Singapore, Singapore. http://wellcomp.org


Only people with a login can read the visit report.