CONFERENCES

Find below the full schedule of upcoming conference sessions at AI Convention Europe.

8:30 - 9:00 am
Welcome coffee
Networking room
Come have a coffee with us!
09:00 - 9:30 am
Opening session: "How do you teach AI the value of trust? Which cultural and technical attributes are practically required along the development lifecycle of AI systems?"
Patrice Latinne - Ernst & Young
AI is not a single technology, but a diverse set of methods and tools continuously evolving in tandem with advances in data science, chip design, cloud services and end-user adoption. AI should be pursued pragmatically. The transformational potential for robotic, intelligent and autonomous systems cannot be overstated, but an enterprise should still balance enthusiasm with pragmatism, taking a realistic view of the cost, time and risks involved in such programs to manage expectations. This presentation aims to introduce a few recent governance and technical practices and use cases that can help enterprises understand and better address the slate of new and expanded risks that may undermine trust, not only in AI systems but also in products, brands and reputations.
9:30 - 10:00 am
BUSINESS – FINANCE – ENTERPRISE
10:00 - 10:30 am
BUSINESS – FINANCE – ENTERPRISE
10:30 - 11:00 am
Coffee Break
Networking room
11:00 - 11:30 am
Keynote speaker: "Exploring Industry 4.0 towards Factories of the Future and beyond"
Paul Peeters - Agoria
Manufacturing SMEs need to start their digital transformation journey right now. Paul Peeters will present the European Advanced Manufacturing Support Centre (ADMA). He works at Agoria’s Innovation Centre of Expertise, the entity coordinating ADMA, and has been leading the development of a European methodology for transforming manufacturing SMEs into Factories of the Future. Learn how you can benefit from the EU Advanced Manufacturing Support Centre and how to adopt advanced manufacturing solutions and social innovation strategies to jointly contribute to a more competitive, modern and sustainable European industry.
11:30 am - 12:00 pm
BUSINESS – FINANCE – ENTERPRISE
12:00 - 12:30 pm
"Your AI pipeline is too slow? Let's explore 'Speed results' on Spark and TIMi!"
Frank Vanden Berghen - TIMi
The execution speed of data transformations is a crucial element in the everyday life of any data scientist. Indeed, data scientists actually spend less than 20% of their time creating predictive models and doing real "AI work". The remaining 80% is devoted to "simple" data management tasks: research, unification and cleaning of data (source: IDG). This is sometimes called the "80/20 data science dilemma" (google it!). A fast and user-friendly solution for high-velocity data transformation is therefore an important element for the comfort, efficiency and productivity of any data scientist working on AI. For example, if it were possible to reduce those 80% of time devoted to data management to only 8%, the productivity of your data scientists would be multiplied by roughly a factor of three! This explains the relative success of Spark/Scala: the key selling point of Spark (visible on its website) is "speed: run workloads 100x faster".

"What is the real computational speed of your data management platform?" Answering this question is crucial when working on AI tasks. During this session we will therefore explore the results of experiments conducted on the Hadoop-Spark platform and the TIMi platform, focusing on the computation time of these two platforms. Surprisingly, there is only one universally recognized reference benchmark that measures the execution speed of data management solutions: the TPC-H. Currently, the TPC-H is deliberately limited to database engines only (participants include Oracle, MS SQL Server, Teradata, etc.), so 'exotic' data management solutions such as Spark or TIMi fall outside its 'official' scope. Nevertheless, it is very interesting to use the TPC-H workload to run experimental measures of computation time, because this workload is universally recognized as "interesting and representative" of the data transformations that are common in the everyday life of all data scientists. Beyond the TPC-H workload, we will also briefly review the results obtained with the 'Big Data Bench' (TPC-BB), another, more recent benchmark with a broader scope than the TPC-H.

The least that we can say is that the results of this experimental process are really *very* surprising, and I can’t wait to share them with the whole AI community! For the sake of transparency, we have published all the testing scripts and all the results on GitHub. I invite you to discover these results with us during this short 30-minute session dedicated to AI and high-velocity data management.
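As a rough illustration of the factor-of-three claim in the abstract above, here is a minimal sketch of the arithmetic (not part of the speaker's material; the 80/20 split and the tenfold speed-up are the assumptions quoted in the abstract):

# Sketch of the "80/20" productivity arithmetic, in Python.
# Assumption: a data scientist spends 80% of their time on data management
# and 20% on actual modelling ("AI work").
modelling = 0.20          # fraction of time spent on real AI work
data_management = 0.80    # fraction of time spent on data preparation

# If data management were made 10x faster, it would shrink from 80% to 8%
# of the original working time for the same workload.
faster_data_management = data_management / 10

# Total time needed for the same workload, relative to before.
new_total = modelling + faster_data_management   # 0.28

# Throughput multiplier: how much more work fits in the same calendar time.
speedup = 1.0 / new_total
print(f"Productivity multiplier: {speedup:.2f}x")   # ~3.6x, i.e. roughly a factor of three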
12:30 - 1:30 pm
Lunch Break
Networking room
1:30 - 2:00 pm
EDUCATION – POLITICS – SOCIETY
2:00 - 2:30 pm
EDUCATION – POLITICS – SOCIETY
2:30 - 3:00 pm
3:00 pm
Coffee break and networking
Networking room
Come and have a drink with us!