Professor Bing Liu: Continuous Machine Learning
Classic machine learning works by learning a model from a set of training examples. Although this paradigm has been very successful, it requires a large amount of manually labeled data, and it is suitable only for well-defined, static, and narrow domains. Going forward, this isolated learning paradigm is no longer sufficient. For example, it is almost impossible to pre-train intelligent personal assistants, chatbots, self-driving cars, and other robotic systems so that they can interact intelligently with their dynamic environments, because it is very difficult for humans to provide labeled examples or any other supervised information covering all possible scenarios the systems may encounter. Thus, such systems must learn on the job by themselves continuously, retain the learned knowledge, and use it to help future learning. When faced with an unfamiliar situation, they must adapt their past knowledge to deal with the situation and learn from it. This general learning capability is one of the hallmarks of human intelligence; without it, it is probably impossible to build a truly intelligent system. In this talk, I will introduce this learning paradigm and discuss some recent research in this direction.
Bing Liu is a distinguished professor of Computer Science at the University of Illinois at Chicago. He received his Ph.D. in Artificial Intelligence from the University of Edinburgh. His research interests include sentiment analysis, lifelong learning, data mining, machine learning, and natural language processing (NLP). He has published extensively in top conferences and journals, and two of his papers have received 10-year Test-of-Time awards from KDD. He has also authored four books: two on sentiment analysis, one on lifelong learning, and one on Web mining. Some of his work has been widely reported in the press, including a front-page article in the New York Times. In professional service, he served as Chair of ACM SIGKDD (the ACM Special Interest Group on Knowledge Discovery and Data Mining) from 2013 to 2017. He has also served as program chair of many leading data mining conferences, including KDD, ICDM, CIKM, WSDM, SDM, and PAKDD; as associate editor of leading journals such as TKDE, TWEB, DMKD, and TKDD; and as area chair or senior PC member of numerous NLP, AI, Web, and data mining conferences. He is a Fellow of the ACM, AAAI, and IEEE.
Rajeev Rastogi: Machine Learning @ Amazon
In this talk, I will first provide an overview of key problem areas where we are applying Machine Learning (ML) techniques within Amazon, such as product demand forecasting, product search, and information extraction from reviews, along with the associated technical challenges. I will then discuss three specific applications where we use a variety of methods to learn semantically rich representations of data: question answering, where we use deep learning techniques; product size recommendations, where we use probabilistic models; and fake review detection, where we use tensor factorization algorithms.
Rajeev Rastogi is a Director of Machine Learning at Amazon, where he develops ML platforms and applications for the e-commerce domain. Previously, he was Vice President of Yahoo! Labs Bangalore and the founding Director of the Bell Labs Research Center in Bangalore, India. Rajeev is an ACM Fellow and a Bell Labs Fellow. He is active in the fields of databases, data mining, and networking, and has served on the program committees of several conferences in these areas. He currently serves on the editorial board of CACM, and has previously been an associate editor for IEEE Transactions on Knowledge and Data Engineering. He has published over 125 papers and holds over 50 patents. Rajeev received his B.Tech degree from IIT Bombay and a PhD in Computer Science from the University of Texas at Austin.