The concept of a smart home is no longer a vision of the future. Although the ideal of computerised systems seamlessly augmenting our lives has not yet been realised, smart devices such as thermostats, virtual assistants (e.g. Siri, Alexa) and self-driving or assisted-driving cars have transitioned to the domain of regular consumers.
Understanding how users perceive systems is critical if future systems are to become more transparent, efficient and effective. The HCI community has explored in some depth how users form mental models of system operation; however, the majority of the literature has investigated procedural systems, where a repeated action invokes an identical reaction. Smart devices present new challenges: their agency permits variations in the reaction based on context and on behaviour learnt from user feedback. Understanding such systems requires a different kind of mental model, one which encapsulates not how a system works, but how it “thinks”.
More recently, research has started to emerge which highlights how the mental models created by users can drastically differ from reality. A lack of understanding of the capabilities of sensing technologies, compounded by the promises made of intelligent autonomous systems, can lead to unrealistic expectations of a system. This gulf of execution (as coined by Don Norman) can have major implications for usability and, more critically, for safety when considering safety-critical systems such as automated vehicles.
My research centres on how feedback mechanisms can be utilised to inform users' mental models without direct explanation, such that they can develop a more effective relationship with smart systems.
Supervisors: Dr Enrico Costanza, Dr Sebastian Stein, Prof. Alex Rogers