What is NDR?

NDR is an artificial intelligence conference. Well, technically, it’s an artificial intelligence, and machine learning, and deep learning, and data science conference. We don’t discriminate :).

NDR-113 is also the main character in Isaac Asimov’s beautiful sci-fi novel, The Positronic Man. The book tells of a robot that begins to display human characteristics, such as creativity, emotion, and self-awareness. We felt that naming our conference after him was an appropriate homage to the story.

NDR is something we think you’re going to love.

WHERE

Chopin Hall, Palas Ensemble
Iasi, Romania

WHEN
Thursday
June 7th, 2018

WHO’S BEHIND THIS

The good people at Strongbytes and Codecamp, naturally.

Shoot an email to [email protected] for any questions, remarks, or praise you may have, or like our Facebook page to get the latest updates.

You should also subscribe to our newsletter.

Presentations

Bringing Great Minds Together

Your chance to meet international experts and learn from their experience

We aim to bring together practitioners of data science, machine learning, and deep learning. Filled with carefully selected technical talks, our conference is designed to educate and inspire the audience, with experienced speakers sharing their own experiences, showcasing best practices, and presenting business use cases.


An Energising Experience

1 day, 1 track, 11 sessions

At the end of the day you’ll come out excited and exhausted, wanting more. You will get a better understanding of how to build intelligent applications, and see how companies are using intelligent techniques in production. You will find out about new tools and techniques, learn how you can improve your workflow, and discover how you can start your data science career.


See you there!

Our Speakers

Conference Schedule

  • June 04 - NDR Iasi


  • June 06 - NDR Bucharest


  • June 07, 2018 - NDR Iasi


  • 09:45 - 10:15
    Artificial intelligence is projected to add over €13 trillion to the global economy by 2030, but companies outside of the US and China risk being shut out by the ‘data wall’ dominated by internet giants such as Google and Baidu. Bonseyes is an industry-driven collaborative platform which aims to get companies over the wall by using the new power of edge computing and leveraging Europe’s leadership in embedded systems and AI. It enables multi-party open development of artificial intelligence systems, which are emerging as a key growth driver in smart CPS systems in the next decade. This talk will give an overview of current trends in AI together with the key drivers towards edge intelligence. It will present the core concepts of the AI Marketplace project as well as industry use cases within automotive, healthcare, and consumer applications.

  • 10:20 - 10:50
    Communication is a crucial part of our jobs. Data visualization plays an important role in such communication. Despite much scientific research, data visualization is perceived as a combination of technical and artistic skills. In this lecture, you will learn why this is the wrong way to think of data visualization. You will also learn about the biggest visualization anti-patterns that I have been able to identify during more than 15 years of my professional career. Finally, I will present a methodological approach that fixes most of the problems and guides the practitioner towards an effective visual representation of a data-intensive idea.

  • 11:05 - 11:35
    Natural language is the most efficient way humans can communicate, but it does not make sense to machines. NLP/NLU have been developed specifically to narrow this gap. You will learn the basics of NLP/NLU by looking at a number of available solutions with some hands-on examples: cloud vs. on-premises, benefits vs. drawbacks, cost, flexibility, and customisation. We will go through a number of real-life examples where NLP/NLU is a game changer, bringing automation and efficiency to existing and new processes.

  • 11:40 - 12:10
    With machine learning becoming more and more of an engineering problem, the need to track, collaborate on, and easily deploy ML experiments with integrated CI/CD tooling is becoming more relevant than ever. In this session we take a deep dive into the Azure Machine Learning service, a cloud service that you can use to track your work as you build, train, deploy, and manage models. We zoom into the building blocks provided and show, through some demos, how to use them. At the end of this session you will have a good grasp of the technological building blocks of the Azure Machine Learning service, just waiting to be used in your own projects.

  • 12:25 - 12:55
    As the number of ML/DL applications grows exponentially, assessing the risk of AI models becomes more and more important. In my talk I would like to cover the differences between different kinds of uncertainty and how to assess them. We will briefly go through different approaches to uncertainty estimation, such as evidence analysis and Bayesian deep learning, and try to understand their strengths and weaknesses.
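As a concrete illustration of one popular approximation to Bayesian deep learning (not necessarily the approach covered in the talk), the sketch below uses Monte Carlo dropout: dropout stays active at prediction time, and the spread of repeated predictions serves as a rough uncertainty estimate. The model architecture and sizes are assumptions for illustration only.

```python
import numpy as np
import tensorflow as tf

# A minimal sketch of Monte Carlo dropout: keep dropout stochastic at
# prediction time and read the spread of repeated predictions as uncertainty.
inputs = tf.keras.Input(shape=(20,))
x = tf.keras.layers.Dense(64, activation="relu")(inputs)
x = tf.keras.layers.Dropout(0.5)(x)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)

def predict_with_uncertainty(model, x, n_samples=50):
    # training=True keeps dropout active, so each pass samples a different
    # sub-network; the standard deviation across passes estimates uncertainty.
    preds = np.stack([model(x, training=True).numpy() for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)
```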

  • 14:00 - 14:30
    Artificial Intelligence, of which machine learning is a core part, is just a more advanced form of consulting that aims at producing ad hoc, straight-to-the-point results for customers. More than out-of-the-box (cloud) services, AI is about flexible frameworks capable of processing data in a repetitive, configurable, and even agile way. ML.NET is a machine learning framework built specifically for .NET developers. It allows you to use .NET as the underlying framework, and the C# and F# programming languages, to add forms of machine learning to your applications. The most interesting aspect of ML.NET is that it offers a programming platform arranged around the idea of predefined "learning tasks". The library comes with sentiment analysis, fraud detection, and price prediction, and will be extended to image analysis and whatever else becomes of broad interest. The session provides a primer for all those puzzled by the actual meaning of AI.

  • 14:35 - 15:05
    Automated machine learning is the process of automating some or all of the phases in a machine learning pipeline, such as data pre-processing, feature selection, algorithm selection, and hyper-parameter optimization. One advantage of these techniques is the empowerment of users, who may or may not have data science expertise, allowing them to identify machine learning pipelines for their problems that achieve a high level of accuracy while minimizing the time spent on these problems. During this presentation Vlad Iliescu offers a high-level look at some of the available tools for automating machine learning, such as Google's AutoML, Microsoft's Automated ML, Auto-Keras, and auto-sklearn, and some of their advantages and disadvantages. Vlad then goes into more depth on Automated ML, showing how it can be used to automatically train models based on various datasets and evaluation metrics, and how it stacks up against other, manually selected algorithms and hyper-parameter combinations. Automated ML's dependence on the Azure cloud will also be discussed, as well as the available options for deploying the trained models and the privacy implications of using this tool.
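To make the idea concrete, here is a minimal sketch using one of the open-source tools mentioned above, auto-sklearn; it assumes the autosklearn package is installed, and the dataset and time budget are arbitrary choices for illustration rather than a recipe from the talk.

```python
# A minimal auto-sklearn sketch: the tool searches over pre-processing steps,
# algorithms, and hyper-parameters within a given time budget.
import autosklearn.classification
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=300)  # 5-minute search budget (illustrative)
automl.fit(X_train, y_train)

predictions = automl.predict(X_test)
print(accuracy_score(y_test, predictions))
```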

  • 15:20 - 15:50
    Artificial intelligence, machine learning, and deep learning are buzzwords, but they are also a reality now; the client’s trust, however, is difficult to acquire. Creating projects that make sense to the client is the most difficult part. Questions like “What does the client want? What does the client need? What is the client willing to pay? What is important to the client?” are the key to success. Managing AI projects depends on how well we can manage client expectations.

  • 15:55 - 16:25
    Different kinds of information available, and different approaches and techniques for a diversity of challenges: perspectives and opportunities for AI in energy systems.

  • 16:40 - 17:10
    Beyond traditional analytics use cases, plenty of businesses are exploring how to build AI and machine learning capabilities into larger 'cognitive' systems - but efforts often stall at the proof-of-concept stage. It can be very challenging to productionise complex applications of this kind, to construct a mixed team of data scientists and software developers to continuously deliver and run them, and to embed all this into existing business workflows and operations. In this talk we'll look at what it takes to achieve this through specialised approaches to analysis, design, QA, and CI/CD for these integrated ML solutions, and examine what is needed from a DevOps operating model for Cognitive Computing, based on experience in the field.

  • 10:15 - 10:45
    Automated machine learning is the process of automating some or all of the phases in a machine learning pipeline, such as data pre-processing, feature selection, algorithm selection, and hyper-parameter optimization. One advantage of these techniques is the empowerment of users, who may or may not have data science expertise, allowing them to identify machine learning pipelines for their problems that achieve a high level of accuracy while minimizing the time spent on these problems. During this presentation Vlad Iliescu offers a high-level look at some of the available tools for automating machine learning, such as Google's AutoML, Microsoft's Automated ML, Auto-Keras, and auto-sklearn, and some of their advantages and disadvantages. Vlad then goes into more depth on Automated ML, showing how it can be used to automatically train models based on various datasets and evaluation metrics, and how it stacks up against other, manually selected algorithms and hyper-parameter combinations. Automated ML's dependence on the Azure cloud will also be discussed, as well as the available options for deploying the trained models and the privacy implications of using this tool.

  • 10:50 - 11:20
    We are going to discuss the most common scenarios for moving your ML workloads to the cloud (and why you might be doing this). We’ll talk about organizing training in the most effective way, and we’ll briefly discuss the differences between types of accelerators. Afterwards we’ll discuss how TensorFlow deals with very large datasets and complicated models with long training times. Topics covered: why you would move your training workloads to the cloud (common scenarios); renting a single VM; using the TensorFlow Estimators API; managing costs by training with preemptible VMs; accelerators (CPUs vs. GPUs vs. TPUs); an intro to distributed training (data parallelism vs. model parallelism); how TensorFlow handles large datasets; and running TensorFlow on GCP (Google Cloud ML Engine vs. Kubeflow).
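As a hedged illustration of how TensorFlow streams datasets that do not fit in memory (the file path, feature schema, and sizes below are assumptions, not details from the talk), a tf.data input pipeline might look like this:

```python
import tensorflow as tf

# A minimal TF 2-style sketch of an input pipeline that streams a large,
# sharded TFRecord dataset from cloud storage instead of loading it into memory.
feature_spec = {
    "features": tf.io.FixedLenFeature([10], tf.float32),  # assumed schema
    "label": tf.io.FixedLenFeature([], tf.int64),
}

def parse(example_proto):
    return tf.io.parse_single_example(example_proto, feature_spec)

files = tf.data.Dataset.list_files("gs://my-bucket/train-*.tfrecord")  # hypothetical path
dataset = (
    files
    .interleave(tf.data.TFRecordDataset, num_parallel_calls=tf.data.AUTOTUNE)
    .map(parse, num_parallel_calls=tf.data.AUTOTUNE)
    .shuffle(10_000)
    .batch(256)
    .prefetch(tf.data.AUTOTUNE)
)
```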

  • 11:35 - 12:05
    Imagine if you could predict equipment failures before they happen and systematically prevent them. Identify warning signs of potential problems and preemptively service equipment before problems occur. That's what predictive maintenance promises. But as with any other problem that's worth solving, things aren't quite as straightforward, and there are many potential issues which could derail your project. Join this session to understand how to pragmatically approach your next predictive maintenance effort.

  • 12:10 - 12:40
    When it comes to committing insurance fraud, people's imagination is limitless. However, the time that insurance employees have to analyze fraud cases is not. How would you make a machine understand whether a person is telling the truth about a car accident? This talk is the story of how we used computer vision to automatically detect the dynamics of car accidents, as well as different images which show the same damage. We will guide you through the application of techniques such as multi-label classification for recognizing the presence of different car parts in an image, clustering images according to their content, damage detection with object detection techniques, and image similarity. The goal is to understand how to combine different computer vision techniques and different types of neural networks in order to come up with the best solution for identifying fraud through image analysis.

  • 12:55 - 13:25
    If you are in the business of providing insights that help investors make more informed decisions, leading to a just and sustainable global economy, much of your daily job involves researching tens of thousands of companies worldwide for their sustainability performance. This is an open source intelligence task, which consists of monitoring, detecting, extracting, processing, and analyzing an exponentially increasing volume of publicly available data and information. The big challenge is simple: the intelligence needs to be both comprehensive (you cannot miss any relevant story and you need to do general trend analyses) and precise (you cannot afford noise on top of already voluminous data and information). In short: you need to see both the whole informational forest, and each and every relevant tree – story or narrative – within this forest. The Sustainalytics team will present how the solution we have implemented has met this challenge. By utilizing NLP and NLU techniques and technologies, we have created a global open source intelligence system that monitors tens of thousands of news sources for an equally large number of entities. Our system identifies relevant texts, processes and groups them, enriches texts with domain-specific metadata, then outputs about 0.004% of all texts that ever enter the system. The system has achieved a three-fold increase in both recall and precision compared to comparable solutions. We will discuss how a combination of computational linguistics, classical machine learning, and deep learning has helped to almost fully automate the categorization of news and social media stories while solving the informational relevancy problem.

  • 14:30 - 15:00
    A neural network model, no matter how deep or complex, implements a function: a mapping from inputs to outputs. The function implemented by a network is determined by the weights the network uses. So, training a network (learning the function the network should implement) on data involves searching for the set of weights that best enable the network to model the patterns in the data. The most commonly used algorithm for learning patterns from data is the gradient descent algorithm. By itself, the gradient descent algorithm can be used to train a single neuron; however, it cannot be used to train a deep network with multiple hidden layers. Training a deep neural network involves using both the gradient descent algorithm and the backpropagation algorithm in tandem. These algorithms are at the core of deep learning, and understanding how they work is possibly the most direct way of understanding the potential and limitations of deep learning. This talk provides a gentle but still comprehensive introduction to these two important algorithms. I will also explain how the problem of vanishing gradients arises, and how this problem has been addressed in deep learning. Talk type: deep learning, beginner-intermediate.
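For readers who want to see the single-neuron case in code, here is a minimal sketch (synthetic data and illustrative hyper-parameters) of training one logistic neuron with gradient descent; backpropagation extends exactly this idea to multiple layers by propagating gradients backwards through the chain rule.

```python
import numpy as np

# A minimal sketch of training a single neuron (logistic unit) with gradient
# descent: repeatedly step the weights against the gradient of the loss.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                             # 200 examples, 3 features
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)    # synthetic labels

w, b, lr = np.zeros(3), 0.0, 0.1
for epoch in range(100):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # neuron output (sigmoid)
    grad_w = X.T @ (p - y) / len(y)          # gradient of cross-entropy loss w.r.t. weights
    grad_b = np.mean(p - y)
    w -= lr * grad_w                         # step against the gradient
    b -= lr * grad_b
```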

  • 15:05 - 15:35
    Natural language is the most efficient way humans can communicate, but it does not make sense to machines. NLP/NLU have been developed specifically to narrow this gap. You will learn the basics of NLP/NLU by looking at a number of available solutions with some hands-on examples: cloud vs. on-premises, benefits vs. drawbacks, cost, flexibility, and customisation. We will go through a number of real-life examples where NLP/NLU is a game changer, bringing automation and efficiency to existing and new processes.

  • 15:50 - 16:20
    For decades, the quest to build the ultimate model that helps us understand the hearts and minds of customers has been at the top of the agenda for both business decision makers and data scientists. The exponential growth in the quantity of data available, as well as the ever-increasing sophistication of customers, made this model a moving target and a rather difficult one to achieve. Finally, technology has caught up and we’re ready to move into the era of the Customer Model – a complex, powerful, and comprehensive approach that promises to open a new chapter in the way we model customer behavior. Starting from hundreds or thousands of features capturing a wide range of aspects related to this behavior, advanced deep learning models can be trained to encode and measure it in a highly efficient way. Easy and cost-effective access to thousands or tens of thousands of GPU cores through services like Azure Machine Learning or Azure Databricks enables such complex deep learning models to become a viable option. The journey to reach the ultimate customer model is not without difficulties, though. From “simple” problems like encoding categorical features with thousands of distinct values up to the difficult task of designing efficient deep learning encoders, there are many challenges out there. The session will help you better understand them and implement efficient solutions. Packed with lots of deep learning demos, it builds on the real-world expertise Ciprian gained building advanced customer models for a wide range of customers and verticals.
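As a small hedged sketch of the "simple" problem mentioned above (the cardinality, embedding size, and network shape are assumptions, not the speaker's design), a high-cardinality categorical feature can be handled with a learned embedding instead of one-hot encoding:

```python
import tensorflow as tf

# A minimal sketch: map each categorical value to a small learned embedding
# rather than a huge one-hot vector, then feed it to a downstream model.
NUM_CATEGORIES = 50_000   # hypothetical cardinality of the categorical feature
EMBEDDING_DIM = 32

cat_input = tf.keras.Input(shape=(1,), dtype="int32")
emb = tf.keras.layers.Embedding(NUM_CATEGORIES, EMBEDDING_DIM)(cat_input)
emb = tf.keras.layers.Flatten()(emb)
hidden = tf.keras.layers.Dense(64, activation="relu")(emb)
output = tf.keras.layers.Dense(1, activation="sigmoid")(hidden)
model = tf.keras.Model(cat_input, output)
```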

  • 16:25 - 16:55
    Deep learning has been a massive success for some time now, but the industry is moving towards constrained environments such as mobile or real-time systems, where inference time, model size, and power efficiency are first-class requirements. Model compression through knowledge distillation ("dark knowledge") promises to tackle this with a teacher-student paradigm. Can a student overcome its teacher?
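A minimal sketch of the core idea, assuming an arbitrary temperature and loss weighting (not values from the talk): the student is trained against both the hard labels and the teacher's temperature-softened output distribution.

```python
import tensorflow as tf

# A minimal knowledge-distillation loss sketch: combine the usual hard-label
# loss with a soft loss that matches the teacher's softened distribution.
def distillation_loss(y_true, student_logits, teacher_logits, T=4.0, alpha=0.1):
    # Hard loss: standard cross-entropy against the ground-truth labels.
    hard = tf.keras.losses.sparse_categorical_crossentropy(
        y_true, student_logits, from_logits=True)
    # Soft loss: KL divergence between temperature-softened teacher and student
    # distributions; the "dark knowledge" lives in the teacher's small probabilities.
    soft = tf.keras.losses.kl_divergence(
        tf.nn.softmax(teacher_logits / T),
        tf.nn.softmax(student_logits / T)) * (T ** 2)
    return alpha * hard + (1.0 - alpha) * soft
```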

  • 17:10 - 17:40
    Data Science and DevOps teams are aligned around a common goal: putting AI applications into production. Nevertheless, the lack of shared practices and tooling makes it difficult to bridge the gap between these two approaches. This talk examines some widespread hurdles to operationalisation across the entire Data Science lifecycle, from exploratory data analysis and modelling through to deployment and monitoring. We will compare and contrast ML development approaches with traditional DevOps best practices, discussing the applicability of the latter in Data Science. We will also cover some implementation details using both open-source and commercial solutions.

  • 09:15 - 09:45
    How can organizations optimize their sales channels and product targeting? Can you automate the first line of support and improve customer satisfaction? How do I protect my online payment channel from fraud? These and more questions are addressed in this session about building smarter business applications that leverage the capabilities of Artificial Intelligence technologies. Come and see in practice Azure Machine Learning, Microsoft Cognitive Services, and the Bot Framework for building intelligent applications that analyze data and predict better outcomes for businesses.

  • 09:45 - 10:15
    Online advertising is an essential component of any business strategy. Every year, investment in online advertising grows for mobile and web. To meet this growing demand, many online ad publishers build their own ad serving platforms to manage and deliver ad inventory. As a consequence, click prediction systems are crucial for the success of such platforms. In this talk, I will introduce the importance of click prediction in ad servers and some of the challenges found when building click prediction models. I then explore some of the simplest algorithms used to tackle click prediction, as well as some of the parameters that most impact performance.
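One of the simplest setups commonly used for click prediction is a logistic regression over hashed categorical features; the sketch below (invented feature names and data, not taken from the talk) illustrates the idea.

```python
from sklearn.feature_extraction import FeatureHasher
from sklearn.linear_model import LogisticRegression

# A minimal click-prediction sketch: hash high-cardinality categorical
# impression features into a fixed-size vector and fit a logistic regression.
impressions = [
    {"ad_id": "ad_42", "site": "news.example", "device": "mobile"},   # invented data
    {"ad_id": "ad_17", "site": "blog.example", "device": "desktop"},
]
clicks = [1, 0]

hasher = FeatureHasher(n_features=2**18, input_type="dict")
X = hasher.transform(impressions)
model = LogisticRegression().fit(X, clicks)
print(model.predict_proba(X)[:, 1])   # predicted click-through probabilities
```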

  • 10:30 - 11:00
    An insight into the creation of a graph-based, quantum-inspired neural network that outperforms the Big Players (Google, IBM, Microsoft and Alexa) in Natural Language Processing.

  • 11:00 - 11:30
    Common approaches to measuring how well a new model performs can be highly misleading, and simply picking the one with the highest precision/recall can ruin your product. I'll explain how, and look at some simple approaches that we use in Dimensions and that you can apply in your own workflow to combat this, as well as some larger organisational changes that may be required.
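As a toy illustration of how a single headline metric can mislead (an invented example, not one from the talk), a model that never flags the rare positive class can look excellent on accuracy while being useless for the product:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# On a rare-positive problem, "always predict negative" scores 99% accuracy
# while catching none of the cases that actually matter.
y_true = [1] + [0] * 99          # 1% positive class
y_pred = [0] * 100               # model that never flags anything

print(accuracy_score(y_true, y_pred))                    # 0.99
print(precision_score(y_true, y_pred, zero_division=0))  # 0.0
print(recall_score(y_true, y_pred))                      # 0.0
```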

  • 11:45 - 12:15
    Recommender systems increasingly shape your behavior online, recommending everything from the clothes you wear and the music you listen to, to the people you become friends with. In this talk we will take a look at the major types of recommender systems, how they work, including their advantages and disadvantages, and how they can be used effectively.
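To make one of the major families concrete, here is a minimal sketch of item-based collaborative filtering (the ratings matrix is invented for illustration): items are compared by the users who rated them, and a user is recommended unseen items similar to those they already liked.

```python
import numpy as np

# A minimal item-based collaborative filtering sketch.
ratings = np.array([           # rows: users, cols: items, 0 = not rated
    [5, 4, 0, 1],
    [4, 0, 4, 1],
    [1, 1, 5, 4],
], dtype=float)

# Cosine similarity between item columns.
norms = np.linalg.norm(ratings, axis=0, keepdims=True)
item_sim = (ratings.T @ ratings) / (norms.T @ norms + 1e-9)

user = ratings[0]
scores = item_sim @ user / (np.abs(item_sim) @ (user > 0) + 1e-9)
scores[user > 0] = -np.inf     # do not recommend items the user already rated
print(int(np.argmax(scores)))  # index of the recommended item
```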

  • 12:15 - 12:45
    This talk will give an introduction to neural networks and how they are used for machine translation. The primary goal of the talk is to provide a deep enough understanding of NMT that the audience can appreciate the strengths and weaknesses of the technology. The talk starts with a brief introduction to standard feed-forward neural networks (what they are, how they work, and how they are trained); this is followed by an introduction to word embeddings (vector representations of words), and then we introduce recurrent neural networks. Once these fundamentals have been introduced, we focus on the components of a standard neural machine translation architecture, namely: encoder networks, decoder language models, and the encoder-decoder architecture.
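The sketch below (vocabulary and layer sizes are arbitrary assumptions, and attention is omitted for brevity) shows the skeleton of such an encoder-decoder model: the encoder embeds and summarises the source sentence, and the decoder acts as a language model over target words conditioned on that summary.

```python
import tensorflow as tf

# A minimal encoder-decoder NMT skeleton: embeddings -> encoder RNN state ->
# decoder RNN conditioned on that state -> distribution over target words.
SRC_VOCAB, TGT_VOCAB, EMB, HID = 8000, 8000, 256, 512  # hypothetical sizes

# Encoder: embeds source words and summarises the sentence in its final state.
src = tf.keras.Input(shape=(None,), dtype="int32")
src_emb = tf.keras.layers.Embedding(SRC_VOCAB, EMB)(src)
_, state_h, state_c = tf.keras.layers.LSTM(HID, return_state=True)(src_emb)

# Decoder: a language model over target words, conditioned on the encoder state.
tgt = tf.keras.Input(shape=(None,), dtype="int32")
tgt_emb = tf.keras.layers.Embedding(TGT_VOCAB, EMB)(tgt)
dec_out, _, _ = tf.keras.layers.LSTM(HID, return_sequences=True, return_state=True)(
    tgt_emb, initial_state=[state_h, state_c])
logits = tf.keras.layers.Dense(TGT_VOCAB)(dec_out)

model = tf.keras.Model([src, tgt], logits)
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```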

  • 13:45 - 14:15
    At first sight, forecasting looks like another regression problem; however, time series pose unique statistical challenges that require specialised models. Starting with some common mistakes (and fixes!) in time series analysis, we will then introduce an array of techniques from classical ARIMA to neural networks, with a short Bayesian detour. Different methods will be illustrated and compared using a large spatio-temporal dataset as motivating example. We conclude with some modelling recommendations and strategies to tackle general forecasting problems.
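As a small hedged example of the classical end of that spectrum (synthetic data and an arbitrary model order, purely illustrative), fitting an ARIMA model and producing a forecast with statsmodels looks like this:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# A minimal ARIMA sketch: fit a classical model to a daily series and forecast
# two weeks ahead. The series here is a synthetic random walk.
rng = np.random.default_rng(0)
series = pd.Series(np.cumsum(rng.normal(size=200)),
                   index=pd.date_range("2017-01-01", periods=200, freq="D"))

model = ARIMA(series, order=(1, 1, 1)).fit()
print(model.forecast(steps=14))   # 14-day-ahead forecast
```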

  • 14:15 - 14:45
    Reinforcement Learning is learning what to do – what action to take in a specific situation – in order to maximize some type of reward. It’s one of the most promising areas of Machine Learning today. It plays an important part in some very high-profile success stories of AI, such as mastering Go, learning to play computer games, autonomous driving, autonomous stock trading, and more. In this talk we’ll introduce the main theoretical and practical aspects of Reinforcement Learning, discuss its very distinctive set of challenges, and explore what the future looks like for self-training machines.
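A minimal sketch of the core idea, using tabular Q-learning on a toy corridor environment (the environment and hyper-parameters are invented for illustration, not taken from the talk): the agent repeatedly updates its estimate of the long-term reward of each action in each state.

```python
import numpy as np

# A minimal tabular Q-learning sketch on a 1-D corridor: move left or right,
# reward 1 for reaching the goal state at the right end.
N_STATES, GOAL = 6, 5
Q = np.zeros((N_STATES, 2))            # Q[state, action]; 0 = left, 1 = right
alpha, gamma, eps = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != GOAL:
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
        r = 1.0 if s_next == GOAL else 0.0
        # Q-learning update: move the estimate toward reward plus the
        # discounted value of the best next action.
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next
```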

  • 15:00 - 15:30
    User satisfaction surveys are a common and powerful tool for helping customer experience teams improve their product, by helping them understand which parts of the user experience contribute most to a given outcome. However, they suffer from two disadvantages: first, it is difficult and time-consuming to design good survey questions and to analyze the results, and second, convincing many users to complete multi-page, monotonous surveys is difficult and a bad user experience. In this talk, we explore techniques such as clustering, natural language understanding, and summarization that enable customer experience teams to easily derive insight from a single open-ended question rather than a long sequence of very specific multiple-choice questions.
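A minimal sketch of the clustering step (invented responses, and not necessarily the speakers' pipeline): free-text answers to one open-ended question are vectorised with TF-IDF and grouped so recurring themes surface automatically.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# A minimal sketch: cluster open-ended survey answers into themes.
responses = [
    "checkout was slow and confusing",
    "love the new dashboard design",
    "the payment page keeps timing out",
    "dashboard charts are really useful",
]

X = TfidfVectorizer(stop_words="english").fit_transform(responses)
labels = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(X)
for text, label in zip(responses, labels):
    print(label, text)
```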

  • 15:30 - 16:00
    Deep learning is the buzzword of the day in IT. Fueled by the significant advancements generated by GPUs and, lately, by FPGAs, deep learning is on the path to becoming ubiquitous. Yet most people are unaware that the first incarnation of a neural net, the perceptron, has its 60th birthday this year. Once almost deemed a “dead end”, neural nets, represented by their most preeminent incarnation – deep learning nets – are coming back into the public spotlight with a vengeance. Join me in this session to discover the inner workings of deep learning networks, their advantages and pitfalls, as well as their areas of applicability. I’ll cover the history and evolution of the field as well as its present state of the art. We’ll talk about the most popular deep learning platforms, as well as about how the cloud and the intelligent edge together enable a broad range of scenarios to be addressed.

  • 16:15 - 16:45
    Azure is huge – there are so many choices to make, and new options seem to arrive every day. But which to choose? And why? In this session we will explore the various options for doing Artificial Intelligence on Azure, and also do a demo of the latest and greatest technology available for you to use today. Even a sneak peek into the future will be provided.

Partners

Location

Chopin Hall - Palas Congress Hall, Palas Street no. 7A, Iași, Romania 700259