National Aeronautics and Space Administration

29–31 August 2017: Machine Learning Workshop

Steven Zornetzer : Welcoming Remarks

Bio: Steven F. Zornetzer trained as a neurobiologist and professor of neuroscience interested in how the brain processes information. Steve has since evolved from academic into a creative and dynamic leader and senior executive at NASA's Ames Research Center in Silicon Valley.

Currently serving as Ames' Associate Center Director, he formerly served as Director of Research and prior to that as Director of Information Sciences and Technology at Ames. He was lead author of the influential book, Introduction to Neural and Electronic Networks (Neural Networks: Foundations and Applications, 1995). Recognized for his leadership in revolutionary information technology-based approaches to aerospace and space exploration missions, his interests range from cognitive, perceptual, and neural sciences to integrative and synthetic biology, biological information processing, molecular biology, genetic engineering, and biomedical science.

Before joining NASA in 1997, Steve headed the Life Sciences Directorate for the Office of Naval Research (ONR). In 2008 he received the Presidential Distinguished Executive Award, and in 2010 NASA's Outstanding Leadership Medal. He is a driver of NASA Ames' leadership in environmental sustainability. Most recently he has focused his efforts on climate change and the built environment.

Steve led the design and construction of the highest-performing, net-energy-positive building in the Federal Government. His vision is an intelligent adaptive building control system that dynamically optimizes the building's energy performance and working environment based on the real-time demands of its occupants. Additionally, the control system is being designed to learn from its own past performance and improve over time.

Dr. Steven Zornetzer is the spokesperson for the NASA Sustainability Base.


29 (Tuesday)

Peter Norvig: Keynote Speaker
"The Practice of Machine Learning."

Abstract: Machine Learning has proven to be an effective tool across many science and technology fields. Successful uses require skill in a number of areas: theoretical algorithms, the integration of software packages, good data handling practice, scientific subject area expertise, and general software engineering project management. This talk explores how to put these together.

Bio: Peter Norvig is a Director of Research at Google Inc. Previously, he was head of Google's core search algorithms group, and of NASA Ames' Computational Sciences Division, making him NASA's senior computer scientist. He received the NASA Exceptional Achievement Award in 2001. He has taught at the University of Southern California and the University of California at Berkeley, from which he received a Ph.D. in 1986, and the distinguished alumni award in 2006. He was co-teacher of an Artificial Intelligence class that signed up 160,000 students, helping to kick off the current round of massive open online classes.
Publications include:

  • Artificial Intelligence: A Modern Approach (the leading textbook in the field)
  • Paradigms of AI Programming: Case Studies in Common Lisp
  • Verbmobil: A Translation System for Face-to-Face Dialog
  • Intelligent Help Systems for UNIX
Peter is also the author of the Gettysburg Powerpoint Presentation and the world's longest palindromic sentence. He is a fellow of the AAAI, ACM, California Academy of Science and American Academy of Arts & Sciences.

Vipin Kumar: Keynote Speaker
"Research Funded by the NSF Expeditions in Computing Program and NASA."

Abstract: The climate and earth sciences have recently undergone a rapid transformation from a data-poor to a data-rich environment. In particular, massive amounts of data about Earth and its environment are now continuously being generated by a large number of Earth observing satellites as well as physics-based earth system models running on large-scale computational platforms. These massive and information-rich datasets offer huge potential for understanding how the Earth's climate and ecosystem have been changing and how they are being impacted by human actions. This talk will discuss the challenges involved in analyzing these massive data sets as well as the opportunities they present for advancing both machine learning and the science of climate change, in the context of monitoring the state of the tropical forests and surface water on a global scale.

Bio: Vipin Kumar is a Regents Professor and holds the William Norris Chair in the Department of Computer Science and Engineering at the University of Minnesota. His research interests include data mining, high-performance computing, and their applications in climate/ecosystems and health care. He is currently leading an NSF Expeditions project on understanding climate change using data science approaches. He has authored over 300 research articles, and has co-edited or coauthored 10 books, including the widely used textbooks Introduction to Parallel Computing and Introduction to Data Mining.

Vipin has served as chair/co-chair for many international conferences and workshops in the areas of data mining and parallel computing, including the 2015 IEEE International Conference on Big Data, the IEEE International Conference on Data Mining (2002), and the International Parallel and Distributed Processing Symposium (2001). He is a Fellow of the ACM, IEEE, AAAS, and SIAM. Vipin's foundational research in data mining and high-performance computing has been honored by the ACM SIGKDD 2012 Innovation Award, the highest award for technical excellence in the field of Knowledge Discovery and Data Mining (KDD), and the 2016 IEEE Computer Society Sidney Fernbach Award, one of the IEEE Computer Society's highest awards.

Michael Lowry: Organizer
"Machine Learning Workshop Objectives and Overview."

Workshop Objectives: Machine Intelligence will revolutionize NASA’s missions. Machine Intelligence has three legs: computer hardware; algorithms for cognition, perception and intelligent execution; and machine learning. All three have had orders of magnitude advances since the inception of Artificial Intelligence in the 1950s. This workshop will explore ongoing applications of Machine Learning to all of NASA’s mission directorates, and look forward to new opportunities. Sessions will feature both speakers describing successes and lessons learned from past applications of Machine Learning, and also domain experts providing descriptions of mission needs and future opportunities. Breakout sessions will bring together domain experts, computer scientists, and managers to formulate next steps for machine learning for NASA.

Bio: Dr. Michael Lowry is the principal investigator for NASA Aeronautics' Autonomy Operating System (AOS) for UAVs. The AOS team is attempting to develop an autonomous pilot that can operate in the National Airspace System with proficiency approaching that of a human pilot. Dr. Lowry serves at Ames as NASA's Chief Technologist for Reliable Software Engineering. He is the steering committee chair for the NASA Formal Methods Symposium, and an IEEE Automated Software Engineering Fellow. Lowry received his MS/BS from MIT and his PhD in 1989 from Stanford University in Artificial Intelligence. He co-authored one of the first papers on explanation-based algorithms for machine learning: "Learning Physical Descriptions from Functional Descriptions, Examples, and Precedents."

Nikunj Oza: Organizer
"Data Sciences at NASA."

Abstract: This talk will give a broad overview of work at NASA in the space of data sciences, data mining, machine learning, and related areas. This will include work within the Data Sciences Group at NASA Ames, together with other groups at NASA and university and industry partners. We will delineate our thoughts on the roles of NASA, academia, and industry in advancing machine learning to help with NASA problems.

Bio: Nikunj Oza is the leader of the Data Sciences Group at NASA Ames Research Center and of the Discovery of Precursors to Safety Incidents (DPSI) team, which applies data mining to aviation safety. Dr. Oza's 40+ research papers reflect his research interests, which include data mining, fault detection, and their applications to aeronautics and Earth science. He received the Arch T. Colwell Award for co-authoring one of the five most innovative technical papers selected from 3300+ SAE technical papers in 2005. He received his B.S. in Mathematics with Computer Science from MIT in 1994, and his M.S. (in 1998) and Ph.D. (in 2001) in Computer Science from the University of California at Berkeley.

Deepak Kulkarni: "Models of Weather Impact on Airspace Operations."

Abstract: Flight delays have been a serious problem in the national airspace system costing about $30B per year. About 70% of the delays are attributed to weather and up to two thirds of these are avoidable. Better decision support tools would reduce these delays and improve air traffic management tools. Such tools would benefit from models of weather impacts on the airspace operations. 

Our work has focused on the use of machine learning methods to mine various types of weather and traffic data to develop such models. This talk will describe the different types of weather impact models we have developed over the years.
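As a toy illustration of what such a model might look like (all numbers and the linear form below are invented for the sketch, not taken from the talk), one can fit a simple relationship between a single convective-weather coverage feature and mean departure delay:

```python
# Hedged sketch: fit a linear model relating a convective-weather coverage
# feature to mean departure delay. Data are illustrative, not from the talk.

def fit_line(xs, ys):
    """Ordinary least squares for y ~ a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# coverage: fraction of a sector obscured by convective weather (made up)
coverage = [0.0, 0.1, 0.2, 0.4, 0.6]
delay_min = [5.0, 9.0, 13.0, 21.0, 29.0]   # mean delay in minutes (made up)

a, b = fit_line(coverage, delay_min)
print(a, b)  # intercept and slope: delay grows with coverage here
```

In practice the models in the talk draw on many weather and traffic features at once; this shows only the fitting step in miniature.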

Bio: Dr. Deepak Kulkarni is a senior computer scientist in the Information Sharing and Integration Group in the Intelligent Systems Division at NASA Ames Research Center. Since 1996, Deepak has managed NASA research and development projects in the areas of data mining, health monitoring systems, and collaborative systems. He has been the principal investigator on several long-term projects with successful NASA deployments in engineering and aviation domains. His contributions have been recognized with several NASA Honor and Group Achievement Awards, as well as awards honoring specific work. He has been a member of the editorial board of International Journal of Operations Research and Information Systems since 2010. In 2013, Society of Automotive Engineers (SAE) International recognized his work with an Arch T. Colwell Merit Award. In his spare time, Deepak is also involved in educational outreach activities, including coaching students for national math competitions such as the Math Olympiads.

Bryan Matthews: "Approach to Assessing RNAV STAR Adherence."

Abstract: Flight crews and air traffic controllers have reported many safety concerns regarding area navigation standard terminal arrival routes (RNAV STARs). However, our information sources to quantify these issues are limited to subjective reporting and time-consuming case-by-case investigations. This work is a preliminary study into the objective performance of instrument procedures and provides a framework to track procedural concepts and assess design functionality. We created a tool and analysis methods for gauging aircraft adherence as it relates to RNAV STARs. This information is vital for a comprehensive understanding of how our air traffic behaves. In this exploratory archival study, we mined the performance of 24 major US airports over the preceding three years. Overlaying radar track data on top of RNAV STAR routes provided a comparison between aircraft flight paths and the waypoint positions and altitude restrictions. NASA Ames supercomputing resources were utilized to perform the data mining and processing. We assessed STARs by lateral transition path (full-lateral), vertical restrictions (full-lateral/full-vertical), and skipped waypoints (skips). In addition, we graphed aircraft altitudes relative to the altitude restrictions and their occurrence rates. Full-lateral adherence was generally greater than full-lateral/full-vertical, but the difference between the rates was not always consistent. Full-lateral/full-vertical adherence medians of the 2016 procedures ranged from 0% at KDEN (Denver) to 21% at KMEM (Memphis). Waypoint skips ranged from 0% to nearly 100% for specific waypoints. Altitude restrictions were sometimes missed by systematic amounts in 1000 ft. increments from the restriction, creating multi-modal distributions. Other times, altitude misses appeared to be more normally distributed around the restriction. This tool may aid in providing acceptability metrics as well as risk assessment information.
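A minimal sketch of the lateral-adherence idea described above, with made-up coordinates and a made-up one-nautical-mile tolerance (the study's actual criteria, distance computations, and vertical checks are more involved):

```python
# Hedged sketch: for each STAR waypoint, ask whether any radar track point
# passes within a lateral tolerance. Coordinates and tolerance are invented.
import math

def dist_nm(p, q):
    # crude flat-earth distance in nautical miles (adequate for a sketch)
    dlat = (p[0] - q[0]) * 60.0
    dlon = (p[1] - q[1]) * 60.0 * math.cos(math.radians((p[0] + q[0]) / 2))
    return math.hypot(dlat, dlon)

def lateral_adherence(track, waypoints, tol_nm=1.0):
    """Fraction of waypoints the track passes within tol_nm."""
    hits = sum(
        1 for wp in waypoints
        if min(dist_nm(pt, wp) for pt in track) <= tol_nm
    )
    return hits / len(waypoints)

waypoints = [(37.0, -122.0), (37.2, -122.1), (37.4, -122.3)]
track = [(37.0, -122.001), (37.1, -122.05), (37.2, -122.1), (37.5, -122.5)]
print(lateral_adherence(track, waypoints))  # track misses the last waypoint
```

Run over every flight into an airport, scores like this yield the adherence-rate distributions the abstract reports.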

Bio: Bryan Matthews received his Bachelor’s Degree in Electrical Engineering from Santa Clara University in 2002. He has worked at NASA Ames Research Center since 2001 where he is currently a member of the Data Sciences group. His research involves utilizing advanced algorithms to intelligently mine heterogeneous data sources and address complex problems in the national airspace.

Vijay Janakiraman: "Discovering Precursors to Aviation Safety Incidents: Algorithms and Applications."

Abstract: Precursors give insight into why and how a safety incident occurred and help in proactively managing aviation safety risks. Mining precursors in aviation time series data is challenging for several reasons, including high dimensionality, data heterogeneity, high class imbalance, and lack of ground truth information. In this talk, I will introduce the problem of precursor discovery, two algorithms for mining precursors, and their application to safety incidents such as high-energy landings, stall risk during take-off, and go-arounds.
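As a crude stand-in for the precursor-mining algorithms in the talk (which handle high dimensionality and class imbalance far more carefully), the sketch below just finds the earliest time step at which a single monitored parameter separates incident flights from nominal ones, on invented data:

```python
# Hedged sketch: locate the earliest time step at which a monitored
# parameter's mean diverges between incident and nominal flights.

def first_divergence(nominal, incident, threshold):
    """nominal/incident: lists of equal-length time series.
    Returns the first index where the mean gap exceeds threshold, else None."""
    T = len(nominal[0])
    for t in range(T):
        mean_nom = sum(s[t] for s in nominal) / len(nominal)
        mean_inc = sum(s[t] for s in incident) / len(incident)
        if abs(mean_inc - mean_nom) > threshold:
            return t
    return None

# toy data: incident flights start diverging at time step 3
nominal  = [[0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0]]
incident = [[0, 0, 0, 4, 6, 8], [0, 0, 0, 5, 7, 9]]
print(first_divergence(nominal, incident, threshold=2.0))  # -> 3
```

The index returned marks a candidate precursor window; the real algorithms score and rank such windows across many parameters at once.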

Bio: Vijay Janakiraman is a member of the NASA University Affiliated Research Center (UARC) and the Data Sciences Group in the Intelligent Systems Division at NASA Ames Research Center. Vijay researches data mining algorithms and applications to NASA problems involving high-dimensional time series and operational data. His work includes anomaly detection, discovery of precursors and prediction of adverse events in flight operational data, traffic flow management, and network shortest path problems. Vijay is also actively exploring novel deep learning and anomaly detection algorithms that can run fast with less overhead. He collaborates as part of a NASA Center Innovation Fund award in using biologically inspired machine intelligence models and representational learning for space image processing and flight anomaly detection. Vijay also collaborates with the NASA Quantum Artificial Intelligence Lab (QuAIL) exploring quantum algorithms for data mining.

Heather Arneson: "Analysis of Convective Weather Impact on Pre-departure Routing of Flights From Fort Worth Center to New York Center."

Abstract: In response to severe weather conditions, Traffic Managers specify flow constraints and reroutes to route air traffic around affected regions of airspace. Providing analysis and recommendations of available reroute options and associated airspace capacities would assist Traffic Managers in making more efficient decisions in response to convective weather. These recommendations can be developed by examining historical data to determine which previous reroute options were used in similar weather and traffic conditions. This paper describes the initial steps and methodology used towards this goal. The focus of this work is flights departing from Fort Worth Center destined for New York Center. Dominant routing structures used in the absence of convective weather are identified. A method to extract relevant features from the large volume of weather data available to quantify the impact of convective weather on this routing structure over a given time range is presented. Finally, a method of estimating flow rate capacity along commonly used routes during convective weather events is described. Results show that the flow rates drop exponentially as a function of the values of the proposed feature and that convective weather on the final third of the route was found to have a greater impact on the flow rate restriction than other portions of the route.
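The reported exponential drop in flow rate can be illustrated with a hedged sketch: assuming the form rate = c * exp(-a * feature), the parameters can be recovered by least squares on the log of the rate (the data below are synthetic, not from the paper):

```python
# Hedged sketch: fit rate = c * exp(-a * feature) by OLS on log(rate).
import math

def fit_exponential(features, rates):
    logs = [math.log(r) for r in rates]
    n = len(features)
    mf = sum(features) / n
    ml = sum(logs) / n
    slope = sum((f - mf) * (l - ml) for f, l in zip(features, logs)) / \
            sum((f - mf) ** 2 for f in features)
    a = -slope
    c = math.exp(ml + a * mf)
    return c, a

# synthetic data generated from c = 40 flights/hour, a = 0.5
feats = [0.0, 1.0, 2.0, 3.0]
rates = [40.0 * math.exp(-0.5 * f) for f in feats]

c, a = fit_exponential(feats, rates)
print(round(c, 3), round(a, 3))  # recovers the generating parameters
```

With real weather features the fit is noisy, but the same log-linear trick gives the capacity-versus-feature curves the abstract describes.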

Bio: Heather Arneson has been a Research Aerospace Engineer at NASA Ames Research Center, Moffett Field, CA since 2011. She received a B.S. in Mechanical and Aerospace Engineering from Cornell University and her M.S. degree and PhD in Aerospace Engineering at the University of Illinois at Urbana-Champaign. Her dissertation was on optimization methods using aggregate models for air traffic management applications. Between her undergraduate and graduate studies, she was a member of the science imaging team for NASA's Mars Exploration Rover Mission. Her current research is in the area of air traffic flow management with a focus on the use of machine learning methods to develop decision support tools.

Shawn Wolfe: "The Inductive and Meta Monitoring Systems (IMS+MMS): Automated Monitoring Techniques for Complex and Autonomous Missions Operations Systems."

Abstract: Monitoring the operations of critical systems is notoriously difficult yet essential for successful human space exploration. We describe the Inductive Monitoring System (IMS) and the Meta Monitoring System (MMS), two interrelated software packages that can be used to monitor such systems. IMS and MMS are “knowledge-free” algorithms in that they do not require any prior knowledge of nominal operations. Instead, both systems learn a model of nominal operations by observing normal operations. We will discuss how IMS and MMS work, the types of data needed to induce such models, and give several examples of past deployments.
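A minimal sketch of the "knowledge-free" idea, assuming a simplified IMS-like monitor in which nominal sensor vectors are greedily clustered and new vectors are scored by distance to the nearest center (the real IMS builds bounding clusters and is considerably more sophisticated):

```python
# Hedged sketch of IMS-style monitoring: learn clusters from nominal data
# only, then flag readings far from every cluster. Centroid clustering is a
# simplification of what IMS actually does.
import math

def nearest_center_dist(x, centers):
    return min(math.dist(x, c) for c in centers)

def learn_centers(nominal, radius):
    """Greedy one-pass clustering: start a new center whenever a sample
    falls outside `radius` of all existing centers."""
    centers = []
    for x in nominal:
        if not centers or nearest_center_dist(x, centers) > radius:
            centers.append(x)
    return centers

nominal = [(1.0, 1.0), (1.1, 0.9), (5.0, 5.0), (5.1, 5.2)]
centers = learn_centers(nominal, radius=0.5)

def anomaly_score(x):
    return nearest_center_dist(x, centers)

print(anomaly_score((1.05, 0.95)))  # near nominal -> small score
print(anomaly_score((3.0, 3.0)))    # unseen region -> large score
```

No fault model is ever specified: anything distant from all observed nominal behavior earns a high score, which is exactly the knowledge-free property the abstract highlights.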

Bio: Shawn Wolfe is a member of the Collaborative and Assistant Systems Technical Area in the Intelligent Systems Division at NASA Ames Research Center. His current work includes clinical decision support via information retrieval topics and a medical data architecture for long range exploration missions. During his tenure at Ames, he has been active in the areas of information retrieval, data mining, multi-agent systems, the semantic web, and knowledge management, among others.

David Thompson: "Advances in Medical Analytics Solutions for Autonomous Medical Operations on Long-Duration Missions."

Abstract: A review will be presented of the progress made under STMD/Game Changing Development Program funding towards the development of a Medical Decision Support System (MDSS) for augmenting crew capabilities during long-duration missions, such as Mars transit. Creating an MDSS initially requires acquiring images and developing models that analyze and assess the features in medical biosensor images that support medical assessment of pathologies. For FY17, the project has focused on ultrasound images of cardiac pathologies: namely, evaluation and assessment of pericardial effusion, and its discrimination from related pneumothorax and even bladder-induced infections that cause inflammation around the heart. This identification is substantially complicated by uncertainty about fluid behavior in microgravity. This talk will present and discuss the work to date in this project, recognizing the conditions under which various machine learning technologies, deep learning via convolutional neural nets, and statistical learning methods for feature identification and classification can be employed, with results conditioned into graphical form for attachment to an inference engine that eventually creates decision-support recommendations for a remote crew in a triage setting.

Bio: David Thompson began his career with NASA in November 1975, at the Jet Propulsion Lab in Pasadena, CA. He holds a PhD in Planetary Physics from UCLA. He was a Member of the Technical Staff in Planetary Physics, carrying out research in Mars analog studies related to the origin of fluvial features observed in Viking images – a task focused both on geomorphic feature identification and on mathematical modeling of the fluid dynamics that could be responsible for such features. He also focused on the nonlinear viscous behavior of the Mars mantle, with modeling related to isostatic rebound and analog analysis of rebound on Earth from Pleistocene deglaciation. His modeling of fluid features took him to field programs on current surge-glacier systems in Alaska and the Yukon, with eventual modeling of Antarctic ice stream flow, both in the field in Antarctica and through theoretical modeling of such complex nonlinear flow. He then supported the NASA Climate Program Office at NASA HQ, as well as other interagency climate modeling initiatives in DC, during 1982-1988. He was appointed Science Advocate for the IOC for Space Station Utilization. David was also named Deputy to the NASA Chief Scientist during this time, and developed several innovative research program initiatives, as well as guiding the protocol for what are now NASA NRAs.

After being Advanced to Candidacy for a 2nd PhD in Mathematical Logic at Georgetown University, he opted to return to full-time research at NASA Ames in October 1988. He occupied the position of Lead of Applications for the incipient AI Group at Ames, effectively “marrying” AI technologies into requisite missions in both Space and Aero Projects; meanwhile carrying out his own research across a broad spectrum of topics, including early atmospheric modeling and evolution of the primitive Earth, geologic clues for soil sample analysis on Mars Sample Return Missions (as hypothesized at the time), math modeling of Solar Flares and Coronal Mass Ejections leading to Space Weather predictions, signal identification and extraction from galactic background noise for Gravitational Waves for the LISA data challenge, and most recently, for development of Medical Decision Support Systems (utilizing computer-aided ultrasound image analysis, plus statistical learning theory and inference engine development and support) that may augment crew medical capabilities on long-duration space missions, such as Mars Transit. His presentation at this Workshop is about this Medical Decision Support analysis and the Machine Learning methods embodied in the analytics to create advice to remote crew in a medical triage setting.

Rodney Martin: "Integrated Systems Health Management for Sustainable Habitats."

Abstract: Habitation systems provide a safe place for astronauts to live and work in space and on planetary surfaces. They enable crews to live and work safely in deep space, and include integrated life support systems, radiation protection, fire safety, and systems to reduce logistics and the need for resupply missions. Innovative health management technologies are needed in order to increase the safety and mission-effectiveness of future space habitats on other planets, asteroids, or lunar surfaces. For example, off-nominal or failure conditions occurring in safety-critical life support systems may need to be addressed quickly by the habitat crew without extensive technical support from Earth, due to communication delays. If the crew in the habitat must manage, plan, and operate much of the mission themselves, operations support must be migrated from Earth to the habitat. Enabling monitoring, tracking, and management capabilities on board the habitat and related EVA platforms for a small crew to use will require significant automation and decision support software. Traditional caution and warning systems are typically triggered by out-of-bounds sensor values, but can be enhanced by including machine learning and data mining techniques. These methods aim to reveal latent, unknown conditions while still retaining and improving the ability to provide highly accurate alerts for known issues. A few of these techniques will be briefly described, along with performance targets for known faults and failures. Specific system health management capabilities required for habitat system elements (environmental control and life support systems, etc.) may include relevant subsystems such as water recycling systems, photovoltaic systems, electrical power systems, and environmental monitoring systems.
Sustainability Base, the agency's flagship LEED-platinum certified green building, acts as a living laboratory for testing advanced information and sustainability technologies, and provides an opportunity to test novel machine learning and controls capabilities. In this talk, key features of Sustainability Base that make it relevant to deep space habitat technology, and its use of the kinds of subsystems listed above, will be presented. Because all such systems must support human occupancy on less power, the building can serve as a testbed for deep space habitats that will need to operate within finite energy budgets.
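The contrast drawn above between threshold-based caution-and-warning and learned, predictive monitoring can be sketched minimally (toy CO2 numbers; a linear extrapolation stands in for the actual learned models and level-crossing predictors):

```python
# Hedged sketch: a traditional out-of-bounds check next to a simple
# predictive alert that extrapolates the recent trend. Numbers are invented.

def out_of_bounds(value, lo, hi):
    return not (lo <= value <= hi)

def predicted_crossing(history, hi, horizon):
    """Linear extrapolation of the last two samples: will the signal
    exceed `hi` within `horizon` future steps?"""
    slope = history[-1] - history[-2]
    return any(history[-1] + slope * k > hi for k in range(1, horizon + 1))

co2 = [400, 410, 425, 445]      # ppm, rising trend (toy values)
print(out_of_bounds(co2[-1], 300, 500))          # not yet out of bounds
print(predicted_crossing(co2, 500, horizon=4))   # but the trend crosses soon
```

The threshold alarm stays silent until the limit is actually violated; the predictive check warns ahead of time, which is the kind of enhancement the abstract describes.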

Bio: Dr. Martin is a senior researcher in the Intelligent Systems Division and acts as the Deputy Data Sciences Group Lead. Over the course of his 12+ years at NASA Ames Research Center, he has worked in the application areas of robotics, data mining for aviation safety and space propulsion, among other areas. He acts as the research lead and facility support manager for Sustainability Base, the agency's flagship LEED-platinum certified green building that acts as a living laboratory for testing advanced information and sustainable technologies. Life support and other safety critical systems that will be essential for missions in support of future advanced and fully sustainable exploration habitats can directly benefit from these efforts. He received his M.S. and Ph.D. degrees in Mechanical Engineering from the University of California at Berkeley and B.S. in Mechanical Engineering from Carnegie-Mellon University. His expertise is in the area of optimal level-crossing prediction, control theory, and machine learning. He is the lead developer for a software tool to be used as the basis for performing comparative analyses for the prediction of adverse events in time-series data using various machine learning techniques. He has also developed tools for the design of alarm systems that employ concepts from optimal level-crossing prediction. He has contributed to over 30 publications in journals, conference papers, or book chapters. He has provided mentorship for over 15 students ranging from undergraduates to Ph.D. candidates, and has been active with graduate level student teams from various universities that contribute to various research activities that support NASA’s goals. He is a Senior Member of AIAA and IEEE, and an Associate Member of ASHRAE.

Adrian Agogino: "Machine Learning for Slow but Steady Interplanetary Construction."

Abstract: For prolonged manned missions to destinations such as the Moon and Mars, there is a need for significant infrastructure construction ahead of time, such as habitats and landing pads. Unfortunately, we have little experience in remote construction, and conventional methods are likely to be expensive, cumbersome, and unreliable. Fortunately, these challenges may be overcome by taking advantage of the long lead time for such missions and using teams of small and slow construction robots. We propose using teams of simple autonomous robots that would perform continuous construction over a period of many years or even decades. While individual robot reliability will be low over such long time frames, system reliability will be maintained by using machine learning over simulations to achieve coordination and reconfiguration in the event of lost robots.

Bio: Dr. Adrian Agogino is a research scientist at NASA Ames Research Center. His interests are in the fields of complex system control, tensegrity robotics, rocket analysis and multiagent control. He is one of the foremost experts in multiagent coordination of air traffic flow and complex control of robots based on tensegrity structures, successfully utilizing evolution algorithms and reinforcement learning to achieve complex behaviors. He has over 70 publications in the fields of machine learning, soft robotics, rocket analysis, multiagent systems, reinforcement learning, evolutionary systems and visualization of complex systems. He has received awards for publications in both learning and in visualization. Since 2004 he has worked for The University of California Santa Cruz, working on complex systems projects including air traffic flow management, rocket engine analysis, robotics and coordination in multi-rover learning.

==============================

30 (Wednesday)

Piyush Mehrotra: "HPC and Data Analysis at NASA Ames."

Abstract: This talk will describe the High-Performance Computing (HPC) and data analysis & visualization resources available at the NASA Advanced Supercomputing (NAS) Division to enable large scale modeling and simulations required to tackle some of the toughest science and engineering challenges facing NASA today. The talk will also highlight some of the projects that involve analysis and analytics of large data sets.

Bio: Piyush Mehrotra is Chief of the NASA Advanced Supercomputing (NAS) Division at NASA Ames Research Center, and oversees the full range of high-performance computing services for NASA's primary supercomputing center. He also manages NAS's modeling and simulation research and development efforts, which are critical for numerous agency missions. Piyush's key responsibilities are setting high-level objectives for the division, creating and maintaining a productive, high-impact computational facility, and coordinating technical strategy with both NASA Ames and NASA Headquarters management. This work encompasses managing about 150 Research and Development (R&D) scientists, engineers, and support staff comprised of civil servants and contractors. In addition, he is the manager of the NASA Earth Exchange (NEX) Project.

Piyush is a world-renowned expert with over 30 years of R&D experience in parallel programming languages, including compilers and runtime systems for shared and distributed memory systems, and middleware infrastructure for grid environments. He is also experienced in performance characterization and benchmarking of parallel architectures. He has published over 100 articles in journals and conferences, edited two books, and served as editor for several issues of international computer science journals. Previously, Piyush served as Chief of the NAS Computational Technologies Branch and Lead of the Application Performance and Productivity (APP) Group. He has held various other technical and managerial positions since joining NASA Ames Research Center in 2000. During this time, he has received several NASA awards, including the Outstanding Leadership Medal in 2010 and several group achievement awards. Previously, he was an Assistant Professor in Purdue University's Computer Science Department and rose through the ranks at ICASE, the Institute for Computer Applications in Science and Engineering, to become a Research Fellow, managing their computer science program. He obtained his Ph.D. in Computer Science from the University of Virginia in 1982.

Johann Schumann: "Program Synthesis for Efficient Machine Learning Algorithms."

Abstract: The development of customized statistical algorithms for data analysis and machine learning can be time-consuming and error prone, in particular, if the requirements cannot be directly met by existing tools or libraries. In this talk, I will present AutoBayes, an open-source tool that has been developed at NASA Ames. Given a compact statistical specification, AutoBayes automatically generates efficient customized algorithms and provides a formal step-by-step derivation of the algorithm. I will demonstrate major features of the tool and discuss the potential of automatic algorithm and code generation for modern and advanced machine learning applications.
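To give a flavor of what "specification in, algorithm out" means (the specification syntax here is purely illustrative, not AutoBayes's actual input language): for a spec along the lines of "x_i ~ N(mu, sigma^2); maximize the likelihood over mu and sigma", the solver such a derivation produces is the closed-form maximum-likelihood estimator:

```python
# Hedged sketch: the closed-form estimator a synthesis tool would derive
# for the Gaussian maximum-likelihood specification above.

def gaussian_mle(xs):
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n   # ML variance divides by n
    return mu, var

print(gaussian_mle([2.0, 4.0, 6.0]))
```

The point of a tool like AutoBayes is that when the spec has no closed form (mixtures, latent variables), it derives an iterative scheme such as EM instead, together with a step-by-step justification, rather than leaving the derivation to hand coding.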

Bio: Johann Schumann (Stinger Ghaffarian Technologies - SGT, Inc.) is a Chief Scientist of Computational Sciences and a member of the Robust Software Engineering Group (RSE) in the Intelligent Systems Division at NASA Ames Research Center. Johann has been engaged in research on software and system health management, verification and validation of advanced air traffic control algorithms and adaptive systems, statistical data analysis of air traffic control systems and Unmanned Aerial Systems (UAS) incident data, and the generation of reliable code for data analysis and state estimation. Johann's general research interests focus on the application of formal and statistical methods to improve design and reliability of advanced safety and security-critical software. Johann obtained his Habilitation degree (2000) from the Technische Universität München, Germany, on application of automated theorem provers in software engineering. His Ph.D. thesis (1991) was on high-performance parallel theorem provers.

Cliff Young: "TensorFlow Processing Unit: Hardware for Fast Neural Net Inferencing".

Abstract: With the ending of Moore's Law, many computer architects believe that major improvements in cost-energy-performance must now come from domain-specific hardware. The Tensor Processing Unit (TPU), deployed in Google datacenters since 2015, is a custom chip that accelerates deep neural networks (DNNs). We compare the TPU to contemporary server-class CPUs and GPUs deployed in the same datacenters. Our benchmark workload, written using the high-level TensorFlow framework, uses production DNN applications that represent 95% of our datacenters’ DNN demand. The TPU is an order of magnitude faster than contemporary CPUs and GPUs and its relative performance per Watt is even larger. The TPU’s deterministic execution model turns out to be a better match to the response-time requirement of our DNN applications than are the time-varying optimizations of CPUs and GPUs (caches, out-of-order execution, multithreading, multiprocessing, prefetching, …) that help average throughput more than guaranteed latency. The lack of such features also helps explain why despite having myriad arithmetic units and a big memory, the TPU is relatively small and low power.
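The TPU's core operation is a large 8-bit integer matrix multiply feeding wide accumulators. A minimal sketch of that style of quantized multiply-accumulate follows; the symmetric quantization scheme and the scale values here are illustrative assumptions, not Google's production scheme.

```python
def quantize(xs, scale):
    """Map floats into the int8 range with a simple symmetric scale."""
    return [max(-127, min(127, round(x / scale))) for x in xs]

def int8_matvec(weights_q, inputs_q, w_scale, x_scale):
    """Integer multiply-accumulate, then one rescale per output.
    Accumulation uses wide integers, as in the TPU's 32-bit accumulators."""
    out = []
    for row in weights_q:
        acc = 0                           # wide accumulator
        for wq, xq in zip(row, inputs_q):
            acc += wq * xq                # 8-bit x 8-bit products
        out.append(acc * w_scale * x_scale)  # dequantize once at the end
    return out

w = [[0.5, -0.25], [1.0, 0.75]]
x = [0.2, -0.4]
w_scale, x_scale = 0.01, 0.005
wq = [quantize(row, w_scale) for row in w]
xq = quantize(x, x_scale)
approx = int8_matvec(wq, xq, w_scale, x_scale)
exact = [sum(wi * xi for wi, xi in zip(row, x)) for row in w]
```

Keeping the inner loop in narrow integers is what lets a systolic array pack so many arithmetic units into a small, low-power die.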

Bio: Cliff Young is a member of the Google Brain team, whose mission is to develop deep learning technologies and deploy them throughout Google. He is one of the designers of Google’s Tensor Processing Unit (TPU), which is used in production applications including Search, Maps, Photos, and Translate. TPUs also powered AlphaGo’s historic 4-1 victory over Go champion Lee Sedol. Before joining Google, Cliff worked at D. E. Shaw Research, building special-purpose supercomputers for molecular dynamics, and at Bell Labs.

Guy Katz: "Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks."

Abstract: Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counter-examples). The technique can also be used to measure a network's robustness to adversarial inputs. Our approach is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the next-generation Airborne Collision Avoidance System for unmanned aircraft (ACAS Xu). Results show that our technique can successfully prove properties of networks that are an order of magnitude larger than could previously be verified. [Based on joint work with Clark Barrett, David Dill, Kyle Julian and Mykel Kochenderfer, In Proc. CAV 2017, https://arxiv.org/abs/1702.01135]
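Reluplex itself extends the simplex method, but the underlying case-split idea can be seen on a toy one-input network: each ReLU is linear on either side of its breakpoint, so the network is piecewise-affine, and an output bound can be certified by checking each affine piece at its interval endpoints. The exhaustive check below is our own illustration of that idea, not the paper's algorithm.

```python
def relu(z):
    return max(0.0, z)

def net(x, w1, b1, w2, b2):
    """One-input network: two hidden ReLU units, linear output."""
    h = [relu(w1[i] * x + b1[i]) for i in range(2)]
    return w2[0] * h[0] + w2[1] * h[1] + b2

def verify_upper_bound(w1, b1, w2, b2, lo, hi, bound):
    """Certify net(x) <= bound for all x in [lo, hi].
    Each ReLU flips state at x = -b/w; between breakpoints the net is
    affine, so its maximum on each sub-interval sits at an endpoint."""
    pts = {lo, hi}
    for i in range(2):
        if w1[i] != 0:
            bp = -b1[i] / w1[i]
            if lo < bp < hi:
                pts.add(bp)
    return all(net(x, w1, b1, w2, b2) <= bound for x in sorted(pts))

# Example: f(x) = relu(x - 0.5) - relu(x - 1.5) never exceeds 1 on [0, 2]
w1, b1 = [1.0, 1.0], [-0.5, -1.5]
w2, b2 = [1.0, -1.0], 0.0
ok = verify_upper_bound(w1, b1, w2, b2, 0.0, 2.0, 1.0 + 1e-9)
```

Real networks have far too many activation patterns to enumerate this way, which is why Reluplex lazily splits on ReLU cases inside an SMT/simplex framework instead.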

Bio: Guy Katz is a postdoctoral fellow at Stanford University, working with Prof. Clark Barrett. He received his Ph.D. at the Weizmann Institute in 2015. His research interests lie at the intersection between Software Engineering and Formal Methods, and in particular in the application of formal methods to lightweight parallel programming models. He has also been working on formally verifying the correctness of software generated via machine learning. In 2018 he will be joining the faculty at the Hebrew University of Jerusalem.

Tim Randles: "Charliecloud - A Lightweight Linux Container for User-Defined Software Stacks in HPC."

Abstract: Compute resources from the desktop to supercomputing are seeing increased demand for user-defined software stacks (UDSS), instead of or in addition to the sysadmin-provided stack. These UDSS support user needs such as complex dependencies or build requirements, externally imposed configurations, portability, and consistency. The challenge for admins is to provide these services in a usable manner while minimizing the risks: security, support burden, missing functionality, and performance.

This talk will introduce and demonstrate Charliecloud, which uses the Linux user and mount namespaces to run containers, built using industry-standard Docker or other procedures, with no privileged operations or daemons on production resources. Our simple approach avoids most security risks while maintaining access to the performance and functionality already on offer, doing so in just 900 lines of code. Charliecloud promises to bring an industry-standard UDSS workflow to existing, minimally altered compute resources.
Charliecloud is open source and available now; it is currently installed on LANL's Woodchuck and Darwin clusters.

Bio: Tim Randles has been working in scientific, research, and high-performance computing for many years, first in the Department of Physics at the Ohio State University, then at the Maui High Performance Computing Center, and most recently as a member of the HPC Division at Los Alamos National Laboratory. His current work is focused on the convergence of the high performance and cloud computing worlds, specifically leveraging cloud computing software and methods to enhance the flexibility and usability of world-class supercomputers.

Natalia Vassilieva: "Deep Learning Cookbook: Technology Recipes to Run Deep Learning Workloads."

Abstract: Deep Learning is a key enabling technology behind the recent revival of Artificial Intelligence. It has already been adopted by a handful of tech giants and industry leaders for computer vision, speech recognition, and other tasks. There is a lot of interest in this technology from many other players across industries. However, many of those players do not have the expertise and resources of tech giants to make informed decisions on optimal hardware and software configurations to run deep learning workloads efficiently. It is common wisdom today that to start a deep learning exploration one needs a GPU-based system and one of the existing open-source deep learning frameworks. But which GPU box to choose? How many of them to put in a cluster? Which framework to pick? If a particular GPU system is enough for one deep learning workload, will it be enough for a different one?

Answers to these questions are not obvious. That’s why we have decided to create a “book of recipes” for deep learning workloads. Our Deep Learning Cookbook is based on a massive collection of performance results for various deep learning workloads on different hardware/software stacks, and analytical performance models. This combination enables us to estimate the performance of a given workload and to recommend an optimal hardware/software stack for that workload. Additionally, we use the Cookbook to detect bottlenecks in existing hardware and to guide the design of future systems for artificial intelligence and deep learning.
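A simple form such an analytical performance model can take is a roofline estimate: a layer's runtime is bounded below by either its arithmetic or its memory traffic, whichever saturates the hardware first. The hardware numbers and layer shape below are illustrative assumptions, not figures from the Cookbook.

```python
def roofline_time(flops, bytes_moved, peak_flops, mem_bw):
    """Lower-bound runtime in seconds: compute-bound vs. memory-bound."""
    return max(flops / peak_flops, bytes_moved / mem_bw)

# Hypothetical accelerator: 10 TFLOP/s peak, 300 GB/s memory bandwidth
PEAK, BW = 10e12, 300e9

# A dense layer: 4096x4096 weights, batch 32, fp32 (4 bytes per value)
flops = 2 * 32 * 4096 * 4096                      # multiply-adds count as 2 flops
bytes_moved = 4 * (4096 * 4096 + 32 * 4096 * 2)   # weights + activations in/out
t = roofline_time(flops, bytes_moved, PEAK, BW)
arithmetic_intensity = flops / bytes_moved        # flops per byte moved
```

At this batch size the layer is memory-bound on the assumed hardware, which is exactly the kind of bottleneck diagnosis the Cookbook aims to automate across workloads and stacks.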

Bio: Natalia Vassilieva is a senior scientist at the Analytics Lab and the manager of HP Labs Russia. Prior to that she worked as a software developer for several IT companies and taught at St. Petersburg State University. She graduated with honors from Saint Petersburg State University (SPSU), Mathematics and Mechanics Department, Chair of Software Engineering, in 2002; held an intern position in the Multimedia Information Modeling and Retrieval group at the research laboratory CLIPS-IMAG (Grenoble, France) in 2002-2003; and obtained her PhD in Computer Science from SPSU in 2010.

Milind Tambe: "AI for Social Good: The Role of Human-Machine Partnership."

Abstract: Discussions about the future negative consequences of AI sometimes drown out discussions of the current accomplishments and future potential of AI in helping us solve complex societal problems. At the USC Center for AI in Society, CAIS, our focus is on advancing AI research and building decision aids in tackling wicked problems in society. This talk will highlight three areas of ongoing work at CAIS and the role of human-machine partnership in this work. First, I will focus on the use of AI for assisting low-resource communities, such as homeless youth. Harnessing the social networks of such youth, I will outline our advances in influence maximization algorithms to help more effectively spread health information, such as for reducing risk of HIV infections. These algorithms have been piloted in homeless shelters in Los Angeles, and have shown significant improvements over traditional methods. Second, I will outline the use of AI for protection of forests, fish, and wildlife; learning models of adversary behavior allows us to predict poaching activities and plan effective patrols to deter them; I will discuss concrete results from tests in a national park in Uganda that have led to removal of snares and arrests of poachers, potentially saving endangered animals. Finally, I will focus on the challenge of AI for public safety and security, specifically for effective security resource allocation. I will also briefly discuss our "security games" framework -- based on computational game theory -- which has led to decision aids that are in actual daily use by agencies such as the US Coast Guard, the US Federal Air Marshals Service and local law enforcement agencies to assist the protection of ports, airports, flights, and other critical infrastructure. I will also highlight a number of other projects at CAIS, outlining the potential for machine learning and the essential role of human-machine partnership in building AI for social good.
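As background for the influence-maximization work described above, a classic baseline is greedy seed selection under the independent cascade model with Monte Carlo spread estimates. This sketch is the textbook method only; the CAIS algorithms additionally handle uncertain, dynamic social networks, which this does not.

```python
import random

def simulate_spread(graph, seeds, p, rng):
    """One independent-cascade run: each newly activated node gets one
    chance to activate each neighbor with probability p."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        node = frontier.pop()
        for nbr in graph.get(node, []):
            if nbr not in active and rng.random() < p:
                active.add(nbr)
                frontier.append(nbr)
    return len(active)

def greedy_seeds(graph, k, p=0.3, runs=300, rng=None):
    """Pick k seeds, each maximizing the Monte Carlo spread estimate."""
    rng = rng or random.Random(0)
    seeds = []
    nodes = set(graph) | {n for nbrs in graph.values() for n in nbrs}
    for _ in range(k):
        best, best_val = None, -1.0
        for cand in sorted(nodes - set(seeds)):
            val = sum(simulate_spread(graph, seeds + [cand], p, rng)
                      for _ in range(runs)) / runs
            if val > best_val:
                best, best_val = cand, val
        seeds.append(best)
    return seeds

# Two cliques joined by hub nodes 0 and 4: greedy picks one hub from each
graph = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0],
         4: [5, 6, 7], 5: [4, 6], 6: [4, 5], 7: [4]}
chosen = greedy_seeds(graph, k=2)
```

The greedy choice spreads the two peer leaders across the two communities rather than concentrating them, mirroring how peer-leader selection works in the shelter pilots.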

Bio: Milind Tambe is Helen N. and Emmett H. Jones Professor in Engineering at the University of Southern California (USC) and the Founding Co-Director of CAIS, the USC Center for Artificial Intelligence in Society, where his research focuses on "AI for Social Good". He is a fellow of AAAI and ACM, as well as recipient of the ACM/SIGAI Autonomous Agents Research Award, Christopher Columbus Fellowship Foundation Homeland security award, INFORMS Wagner prize in Operations Research, Rist Prize of the Military Operations Research Society, IBM Faculty Award, Okawa foundation award, RoboCup scientific challenge award, and other awards including the Orange County Engineering Council Outstanding Project Achievement Award, USC Associates award for creativity in research and USC Viterbi use-inspired research award.

Prof. Tambe has contributed several foundational papers in Artificial Intelligence in areas such as intelligent agents and computational game theory; these papers have received over a dozen best paper and influential paper awards at conferences such as AAMAS, IJCAI, IAAI and IVA. In addition, Milind's pioneering real-world deployments of security games have led him and his team to receive meritorious commendations from the US Coast Guard Commandant, LA Airport Police, and the US Federal Air Marshals Service. For his teaching and mentoring Milind has received the USC Steven B. Sample Teaching and Mentoring award; to date he has graduated 26 PhD students and mentored 10 postdocs. Milind has also co-founded a company based on his research, Avata Intelligence, where he serves as the director of research. Prof. Tambe received his Ph.D. from the School of Computer Science at Carnegie Mellon University.

Karen Myers: "Learning to Help Human Problem Solvers."

Abstract: The field of AI was motivated originally by the objective of automating tasks performed by humans. While advances in machine learning have enabled impressive capabilities such as self-driving vehicles, more cognitive tasks such as planning or design have resisted full automation because of the vast amounts of knowledge and commonsense reasoning that they require. This talk describes a line of research aimed at developing AI systems designed to augment rather than replace human capabilities, leveraging machine learning and other AI technologies. It also presents several successful applications of the research in deployed systems.

Bio: Dr. Karen Myers is a program director and principal scientist in SRI International's Artificial Intelligence Center, where she leads a team focused on developing intelligent systems that facilitate man-machine collaboration. Her research interests include autonomy, multi-agent systems, automated planning and scheduling, personalization technologies, and mixed-initiative problem solving. In addition to being widely published, she has overseen several successful transitions of her research into use both by the U.S. Government and within commercial settings. She has been an associate editor for the journal Artificial Intelligence, a member of the editorial boards for the Journal of Artificial Intelligence Research and the ACM Transactions on Intelligent Systems and Technology, and a member of the executive council for the Association for the Advancement of Artificial Intelligence. Dr. Myers has a Ph.D. in computer science from Stanford University.

Kamalika Das: "ASK-the-Expert: Active Learning Based Knowledge Discovery Using the Expert."

Abstract: Often, the manual review of large data sets, either for purposes of labeling unlabeled instances or for separating meaningful results from uninteresting (but statistically significant) ones, is extremely resource intensive, especially in terms of subject matter expert (SME) time. Use of active learning has been shown to diminish this review time significantly. However, since active learning is an iterative process of learning a classifier based on a small number of SME-provided labels at each iteration, the lack of an enabling tool can hinder the adoption of these technologies in real life, in spite of their labor-saving potential. In this talk we present ASK-the-Expert, an interactive tool that allows SMEs to review instances from a data set and provide labels within a single framework. ASK-the-Expert is powered by an active learning algorithm that trains a support vector machine classifier in the back-end. Its uniqueness lies in its ability to learn from explanations provided by the expert regarding the anomalousness of the instances reviewed. We demonstrate this system in the context of an aviation safety application, but the tool can be adapted to work in other domains where labels are difficult to obtain.
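The core active-learning loop is simple: train on the labels gathered so far, then ask the SME about the instance the model is least certain about. The sketch below illustrates uncertainty sampling; note that ASK-the-Expert uses a support vector machine back-end, whereas this self-contained sketch swaps in a small hand-rolled logistic model.

```python
import math

def train_logistic(X, y, epochs=200, lr=0.5):
    """Plain gradient-descent logistic regression (bias is w[0])."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi                      # gradient of log loss
            w[0] -= lr * g
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * g * xj
    return w

def uncertainty(w, xi):
    """Distance of the predicted probability from 0.5 (lower = less sure)."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    p = 1.0 / (1.0 + math.exp(-z))
    return abs(p - 0.5)

def query_next(w, unlabeled):
    """Active-learning step: ask the SME about the least certain instance."""
    return min(range(len(unlabeled)), key=lambda i: uncertainty(w, unlabeled[i]))

# Labeled pool: 1-D points, class 1 when x > 0
X, y = [[-2.0], [-1.5], [1.5], [2.0]], [0, 0, 1, 1]
w = train_logistic(X, y)
unlabeled = [[-3.0], [0.1], [2.5]]
pick = query_next(w, unlabeled)   # the point nearest the decision boundary
```

Each expert answer goes back into the labeled pool and the model is retrained, so SME effort concentrates on the instances that most reduce model uncertainty.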

Bio: Kamalika Das is a senior researcher in the Data Sciences Group in the Intelligent Systems Division at NASA Ames Research Center. She manages the Data Science & Machine Learning task at the NASA University Affiliated Research Center (UARC), University of California at Santa Cruz. She is also the Principal Investigator/Co-Investigator on two NASA-funded projects on machine learning for climate science. Kamalika's research interests are building machine learning solutions for problems in aviation, climate science, and computational sustainability. In the past, she has led NASA's efforts on the Defense Advanced Research Projects Agency (DARPA) funded Anomaly Detection at Multiple Scales (ADAMS) project for anomaly detection in social networks that identifies potentially malicious entities. She has more than 35 peer-reviewed publications in top-tier conferences and journals. Kamalika has a Ph.D. in computer science with specialization in developing scalable data mining solutions for analyzing "big data" in distributed computing environments.

Alonso Vera: "What Machines Need to Learn to Support Human Problem-Solving."

Abstract: In the development of intelligent systems that interact with humans, there is often confusion between how the system functions with respect to the humans it interacts with and how it interfaces with those humans. The former is a much deeper challenge than the latter — it requires a system-level understanding of evolving human roles as well as an understanding of what humans need to know (and when) in order to perform their tasks. This talk will focus on some of the challenges in getting this right as well as on the type of research and development that results in successful human-autonomy teaming.

Bio: Dr. Alonso Vera is Chief of the Human Systems Integration Division at NASA Ames Research Center. His expertise is in human-computer interaction, information systems, artificial intelligence, and computational human performance modeling. He has led the design, development and deployment of mission software systems across NASA robotic and human space flight missions, including Mars Exploration Rovers, Phoenix Mars Lander, ISS, Constellation, and Exploration Systems. Dr. Vera received a Bachelor of Science with First Class Honors from McGill University in 1985 and a Ph.D. from Cornell University in 1991. He went on to a Post-Doctoral Fellowship in the School of Computer Science at Carnegie Mellon University from 1990-93.

Kamalika Das: "Using Machine Learning to Study the Effects of Climate on the Amazon Rainforests."

Abstract: The Amazonian forests are a critical component of the global carbon cycle, storing about 100 billion tons of carbon in woody biomass, and accounting for about 15% of global net primary production and 66% of its inter-annual variability. There is growing concern that these forests could succumb to precipitation reduction in a progressively warming climate, causing extensive carbon release and feedback to the carbon cycle. Contradictory research, on the other hand, claims that these forests are resilient to extreme climatic events. In this work we describe a unifying machine learning and optimization based approach to model the dependence of vegetation in the Amazon on climatic factors such as rainfall and temperature, in order to answer questions about the future of the rainforests. We build a hierarchical regression tree in combination with genetic programming based symbolic regression for quantifying the climate-vegetation dynamics in the Amazon. The discovered equations reveal the true drivers of resilience (or lack thereof) of these rainforests in the context of a changing climate and extreme events.
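The hierarchical regression tree mentioned above repeatedly partitions the climate space at thresholds that best explain vegetation response. The basic move, finding a single best split, can be sketched as follows; the rainfall threshold and the data are invented purely for illustration.

```python
def best_split(xs, ys):
    """Find the threshold minimizing summed squared error of the two
    leaf means: the basic step a regression tree applies recursively."""
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    pairs = sorted(zip(xs, ys))
    best_t, best_err = None, float("inf")
    for i in range(1, len(pairs)):
        t = (pairs[i - 1][0] + pairs[i][0]) / 2.0   # candidate midpoint
        left = [y for x, y in pairs if x <= t]
        right = [y for x, y in pairs if x > t]
        err = sse(left) + sse(right)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# Invented vegetation index with a sharp rainfall threshold at 100 units
rain = [60, 70, 80, 90, 110, 120, 130, 140]
veg = [0.2, 0.25, 0.2, 0.22, 0.8, 0.82, 0.85, 0.8]
threshold = best_split(rain, veg)
```

In the full approach, symbolic regression then fits an interpretable equation within each leaf, which is what makes the discovered climate-vegetation relationships readable by domain scientists.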

Bio: See the biography accompanying Kamalika Das's earlier talk.

Sangram Ganguly: "Scaling Deep Learning Models to High Resolution Satellite Image Classification on the NASA Earth Exchange Platform."

Abstract: The NASA Earth Exchange (NEX) is a collaborative supercomputing facility housed in the NASA Ames High-End Computing Facility. Some of the key projects within NEX use computational methods, physical models, and new analytical techniques to derive insights from the massive data generated from a suite of satellite-derived imagery products. These datasets are geared towards mission-centric requirements; however, they are widely used by the scientific research community, stakeholders, and commercial companies for a number of applications. At NEX we have invested in new hardware technologies and state-of-the-art machine learning models to classify and segment land cover objects from very high resolution satellite and airborne imagery, and have scaled this work to continental scale by processing almost a quarter million image scenes every year to derive features that are critical for decision making in a number of government agencies. Here we will showcase some of the work currently being performed within NEX on the application of deep learning algorithms for land cover classification, crop segmentation, and climate downscaling.

Bio: Sangram Ganguly is a research scientist in the Biosphere Science Branch at NASA Ames Research Center and at the Bay Area Environmental Research Institute. His work leverages expertise across a range of disciplines, including cloud computing solutions for big data science and analytics, machine learning, advanced satellite remote sensing and image analytics, and climate sciences. Sangram did his Ph.D. at Boston University; prior to that, he graduated with an Integrated Masters (B.S. & M.S.) degree in geosciences from the Indian Institute of Technology (IIT), Kharagpur, India in 2004. Sangram is an active panelist for the National Science Foundation and NASA carbon and ecosystem programs, as well as a science team member for the NASA Carbon Monitoring System Program. His research has been highlighted in mainstream news media, and he is the recipient of five NASA achievement awards in the fields of ecosystem forecasting, climate science, and remote sensing. Sangram is also a co-founding member of the NASA Earth eXchange (NEX, https://nex.nasa.gov) Collaborative and Supercomputing Facility at NASA Ames, and a founding member and developer of the OpenNEX Platform (https://nex.nasa.gov/opennex).

Grey Nearing: "Toward Confirmatory Data Analytics for Process-Based Hydrology Models."

Abstract: I take a machine learning and information-theoretic perspective to the problem of evaluating complex systems models. This talk will outline a set of hypothesis tests, methods for uncertainty quantification, and diagnostic procedures for using large data sets to identify and quantify particular areas for potential improvement to highly coupled dynamical systems models. Machine learning is used to set performance benchmarks, and also to extract diagnostic information. I’ll provide several examples related to hydrology and land surface modeling.

Bio: Grey Nearing received his PhD in 2013 from the Hydrology and Water Resources Department at the University of Arizona. He was a contractor in the Hydrological Sciences Lab at NASA Goddard Space Flight Center for four years. For the past two years, Grey has held a half-time appointment at the National Center for Atmospheric Research. He started as an assistant professor in the Department of Geological Sciences at the University of Alabama at the beginning of this semester.

Stefano Ermon: "Artificial Intelligence for Sustainability."

Abstract: Recent technological developments are creating new spatio-temporal data streams that contain a wealth of information relevant to sustainable development goals. Modern AI techniques have the potential to yield accurate, inexpensive, and highly scalable models to inform research and policy. As a first example, I will present a machine learning method we developed to predict and map poverty in developing countries. Our method can reliably predict economic well-being using only high-resolution satellite imagery. Because images are passively collected in every corner of the world, our method can provide timely and accurate measurements in a very scalable and economical way, and could revolutionize efforts towards global poverty eradication. As a second example, I will present some ongoing work on monitoring food security outcomes.

Bio: Stefano Ermon is an Assistant Professor in the Department of Computer Science at Stanford University, where he is affiliated with the Artificial Intelligence Laboratory and the Woods Institute for the Environment. Stefano's research is centered on techniques for scalable and accurate inference in graphical models, statistical modeling of data, large-scale combinatorial optimization, and robust decision making under uncertainty, and is motivated by a range of applications, in particular ones in the emerging field of computational sustainability.

James MacKinnon: "Detecting Wildfires in MODIS data using Deep Neural Networks."

Abstract: Wildfires are destructive to both life and property, which necessitates an approach to quickly and autonomously detect these events from orbital observatories. This talk will introduce a neural network based approach for classifying wildfires in MODIS multispectral data, and will show how it could be applied to a constellation of low-cost CubeSats. The approach combines training a deep neural network on the ground using high-performance consumer GPUs with a highly optimized inference system running on a flight-proven embedded processor. Neural networks normally execute on hardware orders of magnitude more powerful than anything found in a space-based computer; the inference system is therefore designed to be performant even on the most modest of platforms. This implementation is significantly more accurate than previous neural network implementations, while approaching the accuracy of the state-of-the-art MODFIRE data products.
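On an embedded processor, inference for a small dense network reduces to a few nested multiply-accumulate loops, which is why it can run on modest flight hardware. The toy two-layer classifier below illustrates the shape of such an inference loop; the weights and the two-band pixel input are invented, not the actual MODIS model.

```python
import math

def dense(inputs, weights, biases, act):
    """One fully connected layer: the whole embedded inference path
    is just a few of these nested sums plus an activation."""
    return [act(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

relu = lambda z: max(0.0, z)
sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

def fire_probability(bands, W1, b1, W2, b2):
    """Two-layer classifier over a single pixel's spectral bands."""
    hidden = dense(bands, W1, b1, relu)
    return dense(hidden, W2, b2, sigmoid)[0]

# Hand-set toy weights (illustrative only): heat in band 0 raises the score
W1, b1 = [[2.0, -1.0], [-1.0, 1.0]], [0.0, 0.0]
W2, b2 = [[3.0, -3.0]], [-1.0]
hot = fire_probability([1.0, 0.1], W1, b1, W2, b2)
cold = fire_probability([0.1, 1.0], W1, b1, W2, b2)
```

Training happens once on ground GPUs; only this cheap forward pass, with frozen weights, needs to run on the CubeSat's processor.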

Bio: James MacKinnon is a computer engineer at the NASA Goddard Space Flight Center in the Science Data Processing branch. He received both his B.S. and M.S. at the University of Florida. His most recent work includes developing the payload processing FPGA design for the NASA-developed CeREs CubeSat, and being a principal investigator on an internal R&D project with the goal of designing a neural network for detecting wildfires from multispectral imagery. His expertise includes FPGA development for high-performance, space-based data processing systems, machine learning for science data, and reliable software design.

Hamed Valizadegan: "Machine Learning for Space Projects: Example Engineering and Science Case Studies".

Abstract: The success of space projects depends largely on our ability to understand and analyze the science and engineering data they collect. In the science domain, the volume of collected data is so large that automated tools are required to make sense of it, yet the data is often poorly annotated and/or lacks representative features for effective model construction. In the engineering domain, some components are designed and built for the first time, with only limited domain knowledge available; this makes hand-coded physics-based models less accessible and data-driven predictive models more desirable. However, there is often only a small number of working and failed units to learn from. In this talk, I provide examples of both cases and show how machine learning can help in these imperfect scenarios. First, I use our efforts in lifetime prediction for the Hubble Space Telescope's Fine Guidance Sensors to demonstrate how a little domain knowledge can help us develop effective machine learning models when data is scarce. Then, I describe our experience using machine learning to classify transit-like signals from the Kepler spacecraft when annotation is imperfect and the data features are not representative.

Bio: Hamed Valizadegan holds a PhD in computer science with a focus on machine learning and data mining from Michigan State University. He joined NASA Ames Research Center (UARC) as a machine learning research scientist in 2013. At Ames, he has been involved with multiple projects, including the Hubble Space Telescope, the Kepler mission, and ASRS aviation safety reports. Before joining NASA Ames, he spent three years at the University of Pittsburgh conducting research in medical informatics. He has published in prestigious venues such as the International Conference on Neural Information Processing Systems (NIPS), the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), and Artificial Intelligence and Statistics (AISTATS).

Nick Kern: "Surrogate Modeling Solutions for Cosmological Parameter Estimation of Future HI Radio Intensity Mapping Surveys."

Abstract: Next-generation low-frequency radio telescopes are aiming to tomographically map out the 3D structure of primordial hydrogen in the universe during the Epoch of Reionization: the era when the first generation of galaxies generated UV radiation that ionized the surrounding diffuse neutral hydrogen. The potential scientific return of such 3D tomographic maps is immense; however, connecting these observations to underlying physical cosmological parameters is complicated by the theoretical challenge of modeling the relevant physics at speeds fast enough to enable exploration of the high-dimensional and weakly constrained parameter space. One solution is to use machine learning algorithms to learn the behavior of the sophisticated numerical simulation with a simplified surrogate model, and then to use the surrogate when deriving parameter constraints. In this work, we use Gaussian Process regressors to create surrogate models for a numerical simulation of the Epoch of Reionization. We then use these to provide a parameter constraint forecast for the second-generation HERA experiment currently being built in the Karoo desert in South Africa, showing that HERA will nominally place strong constraints on astrophysical parameters and may help strengthen constraints on the amplitude of density fluctuations set by the Planck CMB mission.
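The surrogate idea can be sketched end-to-end with a minimal Gaussian Process posterior mean: fit once to a handful of expensive simulation runs, then query the cheap surrogate instead of the simulator. The RBF kernel, length scale, and the stand-in "simulation" below are illustrative assumptions, not the actual reionization code.

```python
import math

def rbf(a, b, length=1.0):
    """Squared-exponential kernel on scalars."""
    return math.exp(-((a - b) ** 2) / (2 * length ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_fit(xs, ys, noise=1e-8):
    """Precompute alpha = K^-1 y for posterior-mean prediction."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    return solve(K, ys)

def gp_predict(x, xs, alpha):
    """Posterior mean k(x, X) . alpha -- the cheap surrogate call."""
    return sum(rbf(x, xi) * ai for xi, ai in zip(xs, alpha))

# Pretend each training pair came from one expensive simulation run
expensive_sim = lambda t: math.sin(t)
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [expensive_sim(x) for x in xs]
alpha = gp_fit(xs, ys)
approx = gp_predict(1.5, xs, alpha)   # surrogate query, no simulation run
```

Once fitted, the surrogate makes the many thousands of likelihood evaluations needed for parameter-space exploration affordable.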

Bio: Nick Kern is a PhD student in the Astronomy Department at the University of California, Berkeley. He holds an MA in Astronomy from UC Berkeley, and a BS in Physics and Astrophysics from the University of Michigan. He currently works with the radio astronomy lab at Berkeley on the HERA experiment, which is aiming to map out the distribution of neutral hydrogen in the early universe. His recent work involves using surrogate modeling techniques to relate computationally expensive simulations to future HERA observations in order to perform cosmological parameter estimation.

Sean MacGregor: "Interdisciplinary Collaboration: Finding new Ways of Extracting Scientific Insights from Solar Observations."

Abstract: Recent advances in machine learning algorithms provide a new toolset for extracting scientific insights from NASA data. Effectively leveraging these advances requires interdisciplinary collaboration between researchers who know the data and physical context (e.g., planetary scientists/astrophysicists/heliophysicists) and machine learning researchers who can bring outside expertise and new techniques.

One collaboration case study comes from NASA's Frontier Development Lab (FDL), a public-private partnership between the agency and industry partners (including the SETI Institute, IBM, Nvidia, Intel, kx, & Lockheed Martin). During an 8-week intensive research collaboration facilitated by the FDL, a team of five heliophysicists and machine learning researchers developed a neural network mapping solar UV images taken by SDO/AIA to forecasts of maximum X-ray emissions. In addition to potentially changing the practice of operational flare forecasting, the model has shown that it learns structure in the data that may inform new theoretically rigorous solar models and our understanding of flaring mechanisms.

Bio: Sean defended his machine learning PhD in June at Oregon State University. His research centered on sequential decision making (i.e., reinforcement learning) under the supervision of Thomas Dietterich. Specifically, Sean solved problems associated with bringing machine learning methods to public policy, including the optimization and visualization of forest wildfire suppression decisions. He has published in the academic literature on machine learning, surrogate modeling, visual analytics, and human-computer interaction.

Continuing his penchant for interdisciplinary research, Sean spent the summer at NASA Ames working with heliophysicists at the Frontier Development Lab (FDL). The team's eight weeks of work resulted in a deep learning framework for heliophysics research, which shows promising results on the problem of solar flare forecasting.

Outside his own research, Sean is also the technical manager of the IBM Watson AI XPRIZE, a multi-year $5 million prize for solving grand challenges with artificial intelligence.

Sean is originally from San Diego and enjoys wave and river sports. He is currently in a passive job search that will accelerate in October, following the publication of a final dissertation chapter, the FDL work, and the completion of the first year of the AI XPRIZE.

Mark Cheung: "Heliophysics: Data Science."

Abstract: The convergence of three trends, namely (1) open-access scientific big data, (2) open-source data analytics, and (3) machine learning (ML) and artificial intelligence (AI), promises to accelerate the rate of scientific discovery and the operations-to-research / research-to-operations (O2R/R2O) life cycle in the Heliophysics discipline. This talk will showcase some recent applications of ML/AI to Heliophysics problems, including (a) prediction of geomagnetic field variability in response to the solar wind, (b) spectropolarimetry for measuring the magnetic field on the solar surface, and (c) using deep learning to perform compressed sensing on extreme-UV images of the solar corona. Some of the work in this presentation was made possible by NASA's Frontier Development Lab, a public-private partnership between the agency and industry partners (including the SETI Institute, IBM, Nvidia, Intel, kx, and Lockheed Martin) whose mission is to use artificial intelligence to tackle problems related to planetary defense.

Bio: Mark Cheung is a Staff Physicist at Lockheed Martin Solar & Astrophysics Lab and a Visiting Scholar at Stanford University. He is Principal Investigator for the Atmospheric Imaging Assembly (AIA) instrument onboard NASA’s Solar Dynamics Observatory (SDO) and a Co-Investigator for NASA’s Interface Region Imaging Spectrograph (IRIS) mission. In addition, as Principal Investigator for one of NASA's Heliophysics Grand Challenges Research grants, he leads a team tasked to develop massively parallel numerical models to simulate solar flares and eruptions. He served as Heliophysics Mentor for NASA's Frontier Development Lab in 2017.

Madhulika Guhathakurta: "NASA FDL Overview and Status Update"

Abstract: NASA FDL is an applied research accelerator based at NASA ARC and the SETI Institute, established to maximize the opportunity offered by proximity to technologies and capacities emerging in academia and the private sector, particularly Silicon Valley. Partners such as IBM, Intel, Nvidia, Lockheed Martin, and Luxembourg Space Resources have provided the capital, expertise, and vast computing resources required for rapid experimentation and iteration on data-intensive challenge topics.

FDL has successfully demonstrated the potential of interdisciplinary approaches at the PhD level, paired with public-private partnership, in the field of narrow AI (machine learning, DNNs, and a broad spectrum of other emerging techniques) to tackle challenges in topics such as planetary defense, heliophysics, and lunar prospecting.

FDL researchers have already developed promising applications using deep learning (CNNs), machine vision, dimensionality reduction (t-SNE), variational autoencoders (VAEs), adversarial approaches, and Bayesian optimization over extremely accelerated timeframes.
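Dimensionality reduction is the most self-contained of these techniques to illustrate: high-dimensional feature vectors are projected to a few coordinates in which structure (e.g., clusters) becomes visible. The sketch below uses PCA via SVD as a simple linear stand-in for t-SNE (t-SNE itself is nonlinear; scikit-learn's `sklearn.manifold.TSNE` provides an implementation), on synthetic clustered data that is only a hypothetical proxy for FDL feature vectors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 20-dimensional data with two well-separated clusters, standing
# in for feature vectors extracted from, e.g., solar or asteroid imagery.
cluster_a = rng.normal(0.0, 0.3, (50, 20))
cluster_b = rng.normal(2.0, 0.3, (50, 20))
X = np.vstack([cluster_a, cluster_b])

# PCA via SVD: center the data, then project onto the top two right
# singular vectors (the directions of greatest variance).
X_centered = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
embedding = X_centered @ Vt[:2].T    # shape (100, 2)

# The two clusters separate cleanly along the first principal component.
sep = embedding[:50, 0].mean() - embedding[50:, 0].mean()
print(f"cluster separation along PC1: {abs(sep):.2f}")
```

The same two-column embedding is what gets scatter-plotted when such methods are used to explore a dataset visually; nonlinear methods like t-SNE simply preserve local neighborhoods rather than global variance.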

The FDL Steering Team is looking to replicate the success of 2016 and 2017 by repeating the formula with improvements to the format. The goal of FDL 3.0 is to further industrialize promising approaches of potential use to NASA, through deliberate ‘branching’ and building on the work of previous FDLs to sustain a growing community of expertise in artificial intelligence applications to the planetary sciences.

Bio: Madhulika (Lika) Guhathakurta
Lead Program Scientist, Eclipse 2017, NASA Headquarters
On Detail at NASA Ames, Program Scientist, New Initiatives

As a NASA astrophysicist, Dr. Madhulika Guhathakurta (also known as Lika) has had the opportunity to work as a scientist, mission designer, instrument builder, director and manager of science programs, and teacher and spokesperson for NASA's mission and vision in the Heliophysics Division. Occasionally, she performs all of these roles in a single day. Before joining NASA Headquarters in December 1998, Lika focused her career on the scientific exploration of space, in particular on understanding the Sun as a star and its influence on the planet Earth, with a research focus on the magnetohydrodynamics of the Sun's outermost layer, the solar corona.

Lika has been a Co-Investigator on five Spartan 201 missions aboard space shuttles (STS-56, STS-64, STS-69, STS-87, STS-95) to study the solar corona in white light and UV radiation, and on eight eclipse expeditions. For the past 15 years she has led the Living With a Star program, whose goal is to understand and ultimately predict solar variability and its diverse effects on Earth, human technology, and astronauts in space, also known as "Space Weather," and she has been program scientist for STEREO, SDO, the Van Allen Probes, Solar Orbiter, Parker Solar Probe, and other missions. She created an international consortium of space agency collaborations known as "International Living With a Star" and led the effort to make "Space Weather" part of the agenda of the United Nations Committee on the Peaceful Uses of Outer Space (UNCOPUOS). She has partnered with the American Museum of Natural History in New York and NASM in Washington, DC, to produce full-dome planetarium and 3D IMAX shows that are being exhibited internationally and used by teachers to excite the next generation of space scientists, and she has helped create graduate-level textbooks in heliophysics. She is also the lead scientist for the 2017 eclipse and is presently on detail to NASA Ames Research Center exploring concepts for new initiatives.

==============================

31 (Thursday)

Michael Little: Keynote Speaker
"AIST: Investing in the Future of Earth Science; 20 Years Out."

Abstract: The Earth Science Technology Office invests $15M a year in advanced information technology through the Advanced Information Systems Technology (AIST) Program to provide tools and capabilities for use in the 5-20 year timeframe. Earth Science is beginning to see the value of machine learning techniques in translating large volumes of remote sensing data into a more comprehensive and deeper understanding of natural phenomena. These efforts are starting to coalesce around the concept of Analytic Centers: environments in which all relevant data and the tools to analyze them are available to a particular community. Machine learning and statistical analysis are being funded, in some form, in 12 of 22 AIST16 projects. Recent lessons learned and needs from the science community will be discussed.

Kai Goebel: Closing Remarks

Bio: Kai Goebel is the Technical Area Lead of the Discovery and Systems Health Technology Area in the Intelligent Systems Division at NASA Ames Research Center. He also coordinates the Prognostics Center of Excellence and is the Technical Lead for Prognostics and Decision Making in NASA's System-wide Safety and Assurance Technologies Project. Prior to joining NASA in 2006, he had been a senior research scientist at the General Electric Corporate Research and Development Center since 1997. Kai received his Ph.D. from the University of California at Berkeley in 1996. He has carried out applied research in the areas of real-time monitoring, diagnostics, and prognostics, and has fielded numerous applications for aircraft engines, transportation systems, medical systems, and manufacturing systems. He holds 18 patents and has co-authored more than 300 technical papers in the field of Integrated Vehicle Health Management (IVHM).

Kai was an adjunct professor in the Computer Science Department at Rensselaer Polytechnic Institute (RPI), Troy, NY, from 1998 to 2005, where he taught classes in Soft Computing and Applied Intelligent Reasoning Systems. He has been the co-advisor of eight Ph.D. students. Kai is also a member of several professional societies, including the American Society of Mechanical Engineers (ASME), the Association for the Advancement of Artificial Intelligence (AAAI), the American Institute of Aeronautics and Astronautics (AIAA), the Institute of Electrical and Electronics Engineers (IEEE), Verein Deutscher Ingenieure (VDI), the Society of Automotive Engineers (SAE), and the Prognostics and Health Management Society (PHM). He was the General Chair of the Annual Conference of the PHM Society in 2009, has given numerous invited and keynote talks, and has held many chair positions at the PHM conference and the AAAI annual meeting series. He is currently a member of the board of directors of the PHM Society and Associate Editor of the International Journal of PHM.
