Call for Participation

Advances in AI, combined with sensor, actuator, and embedded systems technologies, have made it feasible to build software systems with the intelligence to control and adapt their behavior in real time. Designing AI systems has therefore become, and will remain, the norm. These systems are likely to be highly distributed across machine and network boundaries, with the potential for any of their components to adapt in response to self-learning capabilities and contextual changes in their environments. Managing the complexity of designing AI systems requires new insights, cognitive models, and advanced software engineering techniques to handle a new class of requirements, ranging from training data and learning models to uncertainty, self-adaptability, safety, and dependability. Because of their dynamic behaviors, it is also critical that these systems be rigorously tested for functional correctness and cognitive capability as they continue to evolve and adapt to unforeseen environments.

The first Workshop on Dependability and Testability of AI Systems (DTAIS ’19) will take place on June 25, 2019 in Portland, OR (USA) as part of the IEEE/IFIP International Conference on Dependable Systems and Networks. This workshop seeks to build bridges between researchers from different yet complementary disciplines, to establish the foundations for the testability and dependability of AI systems, and to develop a holistic, systems-thinking approach that spans the software engineering, artificial intelligence, and cognitive capabilities of human-centric AI systems.