My Focus

My main interest is reinforcement learning, both its state-of-the-art theory and its applications, which range from smart homes to robotics. More broadly, I am interested in machine learning as a whole, especially the algorithms and the ideas behind them. This is reflected in my choice of study and in my theses, and I intend to focus my research and development on these areas.

Feel free to look around!

Latest publications

  • 2019
  • Intelligent autonomous vehicles with an extendable knowledge base and meaningful human control

    Guus Beckers, Joris Sijs, Jurriaan van Diggelen, Roelof JE van Dijk, Henri Bouma, Mathijs Lomme, Rutger Hommes, Fieke Hillerstrom, Jasper van der Waa, Anna van Velsen, Tommaso Mannucci, Jeroen Voogd, Wessel van Staal, Kim Veltman, Peter Wessels, Albert Huizing

    Counterterrorism, Crime Fighting, Forensics, and Surveillance Technologies III,

    Intelligent robotic autonomous systems (unmanned aerial/ground/surface/underwater vehicles) are attractive for military application to relieve humans from tedious or dangerous tasks. These systems require awareness of the environment and their own performance to reach a mission goal. This awareness enables them to adapt their operations to handle unexpected changes in the environment and uncertainty in assessments. Components of the autonomous system cannot rely on perfect awareness or actuator execution, and mistakes of one component can affect the entire system. To obtain a robust system, a system-wide approach is needed, together with a realistic model of all aspects of the system and its environment. In this paper, we present our study on the design and development of a fully functional autonomous system, consisting of sensors, observation processing and behavior analysis, information database …

  • Pluggable Social Artificial Intelligence for Enabling Human-Agent Teaming

    J van Diggelen, JS Barnhoorn, MMM Peeters, W van Staal, M van Stolk, B van der Vecht, J van der Waa, JM Schraagen

    NATO HFM symposium on Human Autonomy Teaming,

    As intelligent systems are increasingly capable of performing their tasks without the need for continuous human input, direction, or supervision, new human-machine interaction concepts are needed. A promising approach to this end is human-agent teaming, which envisions a novel interaction form where humans and machines behave as equal team partners. This paper presents an overview of the current state of the art in human-agent teaming, including the analysis of human-agent teams on five dimensions; a framework describing important teaming functionalities; a technical architecture, called SAIL, supporting social human-agent teaming through the modular implementation of the human-agent teaming functionalities; a technical implementation of the architecture; and a proof-of-concept prototype created with the framework and architecture. We conclude this paper with a reflection on where we stand and a glance into the future showing the way forward.

  • 2018
  • BCI to potentially enhance streaming images to a VR headset by predicting head rotation

    Anne-Marie Brouwer, Jasper van der Waa, Hans Stokking

    Frontiers in Human Neuroscience,

    While numerous studies show that brain signals contain information about an individual's current state that is potentially valuable for smoothing man-machine interfaces, this has not yet led to the use of brain-computer interfaces (BCI) in daily life. One of the main challenges is the common requirement of personal data that is correctly labelled with respect to the state of interest in order to train a model, where this trained model is not guaranteed to generalize across time and context. Another challenge is the requirement to wear electrodes on the head. Here we propose a BCI that can tackle these issues and may be a promising case for BCI research and application in everyday life. The BCI uses EEG signals to predict head rotation in order to improve images presented in a Virtual Reality (VR) headset. When presenting a 360° video to a headset, Field-of-View approaches only stream the content that is in the current field of view and leave out the rest. When the user rotates the head, other content parts need to be made available soon enough to go unnoticed by the user, which is problematic given the available bandwidth. By predicting head rotation, the content parts adjacent to the currently viewed part can be retrieved in time for display when the rotation actually takes place. Eleven participants generated left- and rightward head rotations while head movements were recorded using the headset's motion sensing system and EEG sensors. We trained neural network models to distinguish EEG epochs preceding rightward, leftward and no rotation. Applying these models to streaming EEG data that was withheld from the training showed that 400 …

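    A generic sketch of the pipeline this abstract describes: EEG epochs preceding leftward, rightward, or no rotation are fed to a trained classifier. Everything below is an assumption for illustration only; the data shapes are invented, and a logistic regression stands in for the neural network models the study actually trained.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      # Hypothetical data: 600 EEG epochs of 32 channels x 128 samples, each
      # recorded just before a (possible) head movement.
      # Labels: 0 = leftward, 1 = rightward, 2 = no rotation.
      rng = np.random.default_rng(0)
      epochs = rng.normal(size=(600, 32, 128))
      labels = rng.integers(0, 3, size=600)

      X = epochs.reshape(len(epochs), -1)        # flatten channels x time
      X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

      # The study trained neural networks; a linear classifier stands in here.
      clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)
      print("held-out accuracy:", clf.score(X_test, y_test))

      # In the streaming setting the model runs on each new epoch, so content
      # adjacent to the predicted rotation direction can be fetched early.
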
  • Contrastive Explanations for Reinforcement Learning in terms of Expected Consequences

    Jasper van der Waa, Jurriaan van Diggelen, Karel van den Bosch, Mark Neerincx

    IJCAI-18 Workshop on Explainable AI (XAI),

    Machine learning models are becoming increasingly proficient at complex tasks. However, even for experts in the field, it can be difficult to understand what a model has learned. This hampers trust and acceptance, and it obstructs the possibility of correcting the model. There is therefore a need for transparency of machine learning models. The development of transparent classification models has received much attention, but there are few developments for achieving transparent Reinforcement Learning (RL) models. In this study we propose a method that enables an RL agent to explain its behavior in terms of the expected consequences of state transitions and outcomes. First, we define a translation of states and actions into a description that is easier for human users to understand. Second, we develop a procedure that enables the agent to obtain the consequences of a single action, as well as of its entire policy. The method calculates contrasts between the consequences of a policy derived from a user query and those of the agent's learned policy. Third, we construct a format for generating explanations. A pilot survey study was conducted to explore users' preferences for different explanation properties. Results indicate that human users tend to favor explanations about the policy rather than about single actions.

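    The contrast this abstract describes (expected consequences of the learned policy versus a policy derived from the user's query) can be sketched with plain Monte Carlo rollouts. The toy environment, the policies, and the state descriptions below are hypothetical stand-ins, not the paper's implementation:

      import random
      from collections import Counter

      def describe(state):
          """Translate a raw state into a user-readable outcome (assumed
          1-D corridor with a goal at +10 and a pit at -3)."""
          if state >= 10: return "reaches the goal"
          if state <= -3: return "falls into the pit"
          return "keeps wandering"

      def expected_consequences(step, policy, state, horizon=25, episodes=200):
          """Estimate a policy's consequences by simulating rollouts and
          counting the described outcomes."""
          outcomes = Counter()
          for _ in range(episodes):
              s = state
              for _ in range(horizon):
                  s = step(s, policy(s))
              outcomes[describe(s)] += 1
          return outcomes

      def contrastive_explanation(step, learned, foil, state):
          """'Why your policy (fact) rather than mine (foil)?' -> contrast the
          outcome counts of both policies from the same start state."""
          fact, alt = (expected_consequences(step, p, state) for p in (learned, foil))
          return {o: fact[o] - alt[o] for o in fact.keys() | alt.keys()}

      # Toy dynamics: the chosen move succeeds 90% of the time, else reverses.
      step = lambda s, a: s + (a if random.random() < 0.9 else -a)
      learned = lambda s: +1       # agent's learned policy: head for the goal
      foil = lambda s: -1          # user query: "why not go the other way?"
      print(contrastive_explanation(step, learned, foil, state=0))
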
  • Using Perceptual and Cognitive Explanations for Enhanced Human-Agent Team Performance

    Mark A Neerincx, Jasper van der Waa, Frank Kaptein, Jurriaan van Diggelen

    International Conference on Engineering Psychology and Cognitive Ergonomics,

    Most explainable AI (XAI) research projects focus on well-delineated topics, such as the interpretability of machine learning outcomes, knowledge sharing in a multi-agent system, or human trust in an agent's performance. For the development of explanations in human-agent teams, a more integrative approach is needed. This paper proposes a perceptual-cognitive explanation (PeCoX) framework for the development of explanations that address both the perceptual and cognitive foundations of an agent's behavior, distinguishing between explanation generation, communication and reception. It is a generic framework (i.e., the core is domain-agnostic and the perceptual layer is model-agnostic), and it is being developed and tested in the domains of transport, health-care and defense. The perceptual level entails the provision of an Intuitive Confidence Measure and the identification of the "foil" in a contrastive …

  • Predicting head rotation using EEG to enhance streaming of images to a Virtual Reality headset

    Anne-Marie Brouwer, Jasper Van Der Waa, Hans Stokking

    2nd International Neuroergonomics Conference,

    Introduction: Virtual Reality (VR) enables individuals to be virtually present in another location, for instance at a stadium where your favorite soccer team is playing a match. However, when presenting 360° streaming images to a VR headset, solutions are needed to deal with limited bandwidth. So-called Field-of-View approaches only stream the content that is in the current field of view and leave out the rest because of bandwidth limitations. When the user rotates the head, other content parts need to be made available soon enough to go unnoticed by the user. This problem can be partially solved at the cost of some bandwidth (resulting in, e.g., a loss of spatial resolution) by not only streaming the current field of view, but also the directly surrounding content ('guard bands'), so that the content can be retrieved in time for display when the head rotation actually takes place. If we could predict upcoming head rotations and …

  • SAIL: a social artificial intelligence layer for human-machine teaming

    Bob van der Vecht, Jurriaan van Diggelen, Marieke Peeters, Jonathan Barnhoorn, Jasper van der Waa

    International Conference on Practical Applications of Agents and Multi-Agent Systems,

    Human-machine teaming (HMT) is a promising paradigm for approaching future situations in which humans and autonomous systems closely collaborate. This paper introduces SAIL, a design method and framework for the development of HMT concepts. The starting point of SAIL is that an HMT can be developed in an iterative process in which an existing autonomous system is enhanced with social functions tailored to the specific context. The SAIL framework consists of a modular social layer between autonomous systems and human team members, in which all social capabilities can be implemented to enable teamwork. Within SAIL, HMT modules are developed that construct these social capabilities. The modules are reusable across multiple domains. In addition to introducing SAIL, we demonstrate the method and framework using a proof-of-concept task, from which we conclude that the method is a promising …

  • The SAIL Framework for Implementing Human-Machine Teaming Concepts

    Bob van der Vecht, Jurriaan van Diggelen, Marieke Peeters, Wessel van Staal, Jasper van der Waa

    International Conference on Practical Applications of Agents and Multi-Agent Systems,

    Human-machine teaming (HMT) is a promising paradigm for approaching situations in which humans and autonomous systems must closely collaborate. This paper describes SAIL, a software framework for implementing HMT concepts. The approach of SAIL is to integrate existing autonomous systems into a framework that serves as a social layer between the autonomous systems and human team members. The social layer contains reusable modules that provide the social capabilities enabling teamwork. The players and modules in the framework communicate via a human-readable communication language that has been developed for HMT concepts. We demonstrate the SAIL framework on a proof-of-concept task in which a human operator teams up with a swarm of drones.

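    Both SAIL papers above describe the same architectural idea: a modular social layer that mediates between autonomous systems and human team members via human-readable messages. The sketch below is a loose reading of that architecture; the class names, message format, and example module are all invented for illustration:

      from dataclasses import dataclass

      @dataclass
      class Message:
          """A human-readable teaming message (hypothetical format)."""
          sender: str
          performative: str    # e.g. "inform", "request", "propose"
          content: str

      class HMTModule:
          """Base class for one reusable social capability."""
          def handle(self, msg: Message) -> list:
              return []

      class ProgressReporter(HMTModule):
          """Example module: turns raw system status into team updates."""
          def handle(self, msg):
              if msg.sender == "drone_swarm" and msg.performative == "inform":
                  return [Message("social_layer", "inform",
                                  "Team update: " + msg.content)]
              return []

      class SocialLayer:
          """Routes every message through the plugged-in modules."""
          def __init__(self, modules):
              self.modules = modules
          def dispatch(self, msg):
              return [out for m in self.modules for out in m.handle(msg)]

      layer = SocialLayer([ProgressReporter()])
      for reply in layer.dispatch(Message("drone_swarm", "inform", "area 3 searched")):
          print(reply.content)
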
  • Contrastive Explanations with Local Foil Trees

    J van der Waa, M Robeer, J van Diggelen, M Brinkhuis, M Neerincx

    ICML-18 Workshop on Human Interpretability in Machine Learning (WHI'18),

    Recent advances in interpretable Machine Learning (iML) and eXplainable AI (XAI) construct explanations based on the importance of features in classification tasks. However, in a high-dimensional feature space this approach may become unfeasible without restraining the set of important features. We propose to utilize the human tendency to ask questions like "Why this output (the fact) instead of that output (the foil)?" to reduce the number of features to those that play a main role in the asked contrast. Our proposed method utilizes locally trained one-versus-all decision trees to identify the disjoint set of rules that causes the tree to classify data points as the foil and not as the fact. In this study we illustrate this approach on three benchmark classification tasks.

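    The mechanics are compact enough to sketch: sample a neighbourhood around the instance, label it with the model's own "foil versus not-foil" predictions, fit a shallow one-versus-all surrogate tree, and read off the separating rules. This is a rough illustration under those assumptions, not the authors' implementation:

      import numpy as np
      from sklearn.datasets import load_iris
      from sklearn.linear_model import LogisticRegression
      from sklearn.tree import DecisionTreeClassifier, export_text

      X, y = load_iris(return_X_y=True)
      model = LogisticRegression(max_iter=1000).fit(X, y)   # black box to explain

      x = X[0]                       # instance whose output we explain
      fact = model.predict([x])[0]   # the class the model actually chose
      foil = 2                       # user's question: "why not class 2?"

      # Sample a local neighbourhood around x, labelled by the black box in a
      # one-versus-all fashion: foil or not-foil.
      rng = np.random.default_rng(0)
      neighbourhood = x + rng.normal(scale=0.3, size=(500, X.shape[1]))
      is_foil = model.predict(neighbourhood) == foil

      # Shallow local surrogate tree; its rules contrast fact and foil.
      tree = DecisionTreeClassifier(max_depth=3).fit(neighbourhood, is_foil)
      print(f"Why class {fact} and not class {foil}? Locally decisive rules:")
      print(export_text(tree, feature_names=load_iris().feature_names))
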
  • The design and validation of an intuitive confidence measure

    Jasper van der Waa, Jurriaan van Diggelen, Mark Neerincx

    Workshop on Explainable Smart Systems (EXSS),

    Explainable AI becomes increasingly important as the use of intelligent systems becomes more widespread in high-risk domains. In these domains it is important that the user knows to which degree the system's decisions can be trusted. To facilitate this, we present the Intuitive Confidence Measure (ICM): a lazy-learning meta-model that can predict how likely it is that a given decision is correct. ICM is intended to be easy to understand, which we validated in an experiment. We compared ICM with two different methods of computing confidence measures: the numerical output of the model and an actively learned meta-model. The validation was performed using a smart assistant for maritime professionals. Results show that ICM is easier to understand, but that each user is unique in their desires for explanations. This user study with domain experts shows what users need in their explanations and that personalization is crucial.

  • ICM: An Intuitive Model Independent and Accurate Certainty Measure for Machine Learning.

    Jasper van der Waa, Jurriaan van Diggelen, Mark A Neerincx, Stephan Raaijmakers

    ICAART '18,

    End-users of machine learning-based systems benefit from measures that quantify the trustworthiness of the underlying models. Measures like accuracy provide a general sense of model performance, but offer no detailed information on specific model outputs. Probabilistic outputs, on the other hand, express such details, but they are not available for all types of machine learning, and can be heavily influenced by bias and a lack of representative training data. Further, they are often difficult for non-experts to understand. This study proposes an intuitive certainty measure (ICM) that produces an accurate estimate of how certain a machine learning model is about a specific output, based on errors it made in the past. It is designed to be easily explainable to non-experts and to act in a predictable, reproducible way. ICM was tested on four synthetic tasks solved by support vector machines, and a real-world task solved by a deep neural network. Our results show that ICM is both more accurate and more intuitive than related approaches. Moreover, ICM is neutral with respect to the chosen machine learning model, making it widely applicable.

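    Both ICM papers above describe a lazy-learning meta-model that estimates, for a specific output, how certain the model is, based on errors made in the past. One straightforward reading of that idea (a hedged sketch, not necessarily the papers' exact formulation) is a k-nearest-neighbour lookup over previously scored inputs:

      import numpy as np

      class IntuitiveCertaintyMeasure:
          """Model-agnostic certainty from past errors: for a new input, find
          the k most similar inputs the model has already been scored on and
          return the fraction it got right there (a lazy, k-NN-style learner)."""
          def __init__(self, k=10):
              self.k = k
          def fit(self, X_seen, was_correct):
              self.X = np.asarray(X_seen, dtype=float)
              self.correct = np.asarray(was_correct, dtype=float)
              return self
          def certainty(self, x):
              dists = np.linalg.norm(self.X - np.asarray(x, dtype=float), axis=1)
              return self.correct[np.argsort(dists)[:self.k]].mean()

      # Usage: score any fitted model on held-out data, then query per instance:
      #   was_correct = model.predict(X_val) == y_val
      #   icm = IntuitiveCertaintyMeasure(k=15).fit(X_val, was_correct)
      #   icm.certainty(x_new)   # e.g. 0.87 -> "correct on 87% of similar cases"
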
  • 2017
  • An Intelligent Operator Support System for Dynamic Positioning

    Jurriaan van Diggelen, Hans van den Broek, Jan Maarten Schraagen, Jasper van der Waa

    International Conference on Applied Human Factors and Ergonomics,

    This paper proposes a human-centered approach to Dynamic Positioning systems which combines multiple technologies in an intelligent operator support system (IOSS). The IOSS allows the operator to roam and perform other tasks in quiet conditions. When conditions become more demanding, the IOSS calls the operator back to his bridge position. Particular attention is paid to human factors issues such as trust misalignment and context-aware interfaces.

  • Human Robot Team Development: An Operational and Technical Perspective

    Jurriaan van Diggelen, Rosemarijn Looije, Jasper van der Waa, Mark Neerincx

    International Conference on Applied Human Factors and Ergonomics,

    Turning a robot into an effective team player requires continuous adaptation during its lifecycle to human team members, tasks, and the technological environment. This paper proposes a concept for human-robot team development over longer periods of time and discusses its technological and operational implications. From an operational perspective, we discuss the types of adaptations to team behavior that are required in a military house-search scenario. From a technological perspective, we explain how teamwork adaptations can be implemented using a teamwork module based on ontologies and policies. The approach is demonstrated in a virtual environment in which humans and robots collaborate to find objects during a house search.

  • A feasible BCI in real life: Using predicted head rotation to improve HMD imaging

    Anne-Marie Brouwer, Jasper S van der Waa, Maarten A Hogervorst, Alessia Cacace, Hans Stokking

    Proceedings of the 2017 ACM Workshop on An Application-oriented Approach to BCI out of the laboratory,

    While brain signals potentially provide us with valuable information about a user, it is not straightforward to derive and use this information to smooth man-machine interaction in a real-life setting. Here we propose to predict head rotation on the basis of brain signals in order to improve images presented in a Head Mounted Display (HMD). Previous studies based on arm and leg movements suggest that this could be possible, and a pilot study showed promising results. From the perspective of the field of Brain-Computer Interfaces (BCI), this application provides a good case to put the field's achievements to the test and to develop them further in the context of a real-life application. The main reason for this is that, within the proposed application, acquiring accurately labeled training data (whether and which head movement took place) and monitoring the quality of the predictive model can happen on the fly. From the …

  • 2015
  • Interactive Reinforcement Learning: Two successful solutions for handling an abundance of positive feedback

    JS van der Waa

    Radboud University,

    The field of interactive reinforcement learning focuses on creating learning methods with which users can teach an agent how to solve a task by providing feedback on the agent's behavior in an intuitive way. The goal of such an agent is to find behavior that maximizes the positive feedback it receives. As users provide positive feedback for almost every step towards the task's goal, the agent learns that some sets of actions result in more positive feedback than others. Such a set becomes a positive circuit: a set of actions and situations for which the agent has learned to expect relatively high positive feedback. The problem is that the agent will exploit a positive circuit until corrected by the user, even though the circuit may not actually solve the task. In this study we propose two novel solutions to this positive-circuits problem. Both solutions are new in that they focus on forcing the agent to explore more actions and situations instead of simply exploiting a found positive circuit. The first solution generalizes the feedback given for an action in some situation to situations similar to that one. If this feedback is positive, it motivates the agent to perform this action again, even in unknown situations. The second solution uses a method to detect repetitive behavior and a method to detect high-risk situations likely to elicit such undesired behavior (sketched below). If one of these methods triggers, the agent is forced to perform the most recent, best-assessed action. Both solutions were tested individually by comparing each to a baseline agent with neither solution implemented. Interaction between the two solutions was tested by combining them in one agent …

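    The second solution lends itself to a short sketch: watch the recent state-action history for repetition and, when a suspected positive circuit is detected, override the greedy choice. The window size, the trigger threshold, and the fallback rule below are hypothetical readings of the abstract, not the thesis implementation:

      from collections import deque

      class PositiveCircuitBreaker:
          """Watches recent (state, action) pairs; if the same pair keeps
          recurring (a suspected positive circuit), it overrides the greedy
          choice. The thesis forces the most recent best-assessed action;
          this sketch simply picks the best-valued alternative."""
          def __init__(self, window=20, max_repeats=3):
              self.history = deque(maxlen=window)
              self.max_repeats = max_repeats
          def choose(self, state, greedy_action, q_values):
              pair = (state, greedy_action)
              if self.history.count(pair) >= self.max_repeats:
                  forced = max((a for a in q_values if a != greedy_action),
                               key=lambda a: q_values[a])
                  self.history.append((state, forced))
                  return forced
              self.history.append(pair)
              return greedy_action

      breaker = PositiveCircuitBreaker()
      q = {"left": 0.4, "right": 0.9, "wait": 0.6}
      for _ in range(6):
          print(breaker.choose(state=3, greedy_action="right", q_values=q))
      # prints "right" three times, then breaks the loop with "wait"
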
  • 2013
  • Connecting the Demons: How connection choices of a Horde implementation affect Demon prediction capabilities.

    JS van der Waa

    Radboud University,

    The reinforcement learning framework Horde, developed by Sutton et al. [9], is a network of Demons that processes sensorimotor data into general knowledge about the world. These Demons can be connected to each other and to data streams from specific sensors. This paper focuses on whether and how the capability of Demons to learn general knowledge is affected by different numbers of connections with both other Demons and sensors. Several experiments and tests were done and analyzed to map these effects and to provide insight into how they arose. Keywords: Artificial Intelligence, value function approximation, temporal difference learning, reinforcement learning, predictions, prediction error, pendulum environment, parallel processing, off-policy learning, network connections, knowledge representation, Horde architecture, GQ(λ), general value functions

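    A Horde Demon is essentially a general value function learner: each Demon predicts the discounted sum of its own cumulant signal from a shared sensorimotor stream. The thesis's Demons learn with GQ(λ); the sketch below uses plain linear TD(λ) without the off-policy corrections, as a simplified illustration:

      import numpy as np

      class Demon:
          """One general value function: predicts the discounted sum of its
          own cumulant signal with linear TD(lambda) over shared features."""
          def __init__(self, n_features, cumulant, gamma=0.9, lam=0.7, alpha=0.05):
              self.w = np.zeros(n_features)   # value-function weights
              self.e = np.zeros(n_features)   # eligibility trace
              self.cumulant = cumulant
              self.gamma, self.lam, self.alpha = gamma, lam, alpha
          def update(self, phi, phi_next, obs):
              c = self.cumulant(obs)          # this Demon's private "reward"
              delta = c + self.gamma * self.w @ phi_next - self.w @ phi
              self.e = self.gamma * self.lam * self.e + phi
              self.w += self.alpha * delta * self.e
          def predict(self, phi):
              return self.w @ phi

      # A tiny horde: all Demons consume the same stream, each predicting a
      # different component of the observation vector.
      rng = np.random.default_rng(0)
      horde = [Demon(8, cumulant=lambda obs, i=i: obs[i]) for i in range(3)]
      phi = rng.random(8)
      for _ in range(1000):
          phi_next, obs = rng.random(8), rng.random(3)
          for d in horde:
              d.update(phi, phi_next, obs)
          phi = phi_next
      print([round(d.predict(phi), 2) for d in horde])
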
Complete list of publications