Keynote Speakers

Edward A. Lee


Biography

Edward A. Lee is the Robert S. Pepper Distinguished Professor in the Electrical Engineering and Computer Sciences (EECS) department at U.C. Berkeley. His research interests center on design, modeling, and analysis of embedded, real-time computational systems. He is the director of the nine-university TerraSwarm Research Center (http://terraswarm.org), a director of Chess, the Berkeley Center for Hybrid and Embedded Software Systems, and the director of the Berkeley Ptolemy project. From 2005 to 2008, he served as chair of the EE Division and then chair of the EECS Department at UC Berkeley. He is co-author of nine books (counting second and third editions) and numerous papers. He has led the development of several influential open-source software packages, notably Ptolemy and its various spinoffs. He received the B.S. degree in Computer Science from Yale University, New Haven, CT, in 1979, the S.M. degree in EECS from the Massachusetts Institute of Technology (MIT), Cambridge, in 1981, and the Ph.D. degree in EECS from the University of California, Berkeley, in 1986. From 1979 to 1982 he was a member of technical staff at Bell Telephone Laboratories in Holmdel, New Jersey, in the Advanced Data Communications Laboratory. He is a co-founder of BDTI, Inc., where he is currently a Senior Technical Advisor, and has consulted for a number of other companies. He is a Fellow of the IEEE, was an NSF Presidential Young Investigator, and won the 1997 Frederick Emmons Terman Award for Engineering Education.

Professor Lee’s research group studies cyber-physical systems, which integrate physical dynamics with software and networks. Specifically, his group has made major contributions in models of computation with time and concurrency, model-based design and analysis, domain-specific languages, architectures for real-time computing, schedulability analysis, and modeling and programming of distributed real-time systems. His group has been involved with parallel and distributed computing, including models of computation with distributed real-time behaviors, partitioning and scheduling algorithms, backtracking techniques for fault tolerance and recovery, dataflow models of computation, and modeling of sensor networks. His group has made key contributions in semantics of timed and concurrent systems, including domain polymorphism, behavioral type systems, metamodeling of semantics, and comparative models of computation. His group has also worked on blending computing with continuous dynamics and hybrid systems. Prof. Lee himself has an extensive background in signal processing and physical-layer communication systems, and has co-authored five books on these subjects, in addition to four books on embedded systems technologies.

Resurrecting Laplace’s Demon: The Case for Deterministic Models

In 1814, Pierre-Simon Laplace published an argument for determinism in the universe, reasoning that if someone (a demon) were to know the precise location and momentum of every atom in the universe, then the universe's past and future states at any given time would be completely determined and could be calculated from the laws of classical mechanics. This principle, of course, has been roundly invalidated by quantum mechanics, and yet the laws of classical mechanics continue to be extremely useful for prediction.

In this talk, I will argue that models play different (complementary) roles in engineering and science, and that deterministic models have historically proved even more valuable in engineering than in science. Moreover, I will show that deterministic models for cyber-physical systems (CPS), which combine computation with physical dynamics, remain elusive. I will argue that the next big advance in engineering methods must include deterministic models for CPS, and I will show that such models are both possible and practical.
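To make the idea of determinism in timed models concrete: one well-known ingredient is that simultaneous events must be ordered by a deterministic rule rather than by wall-clock races. The toy discrete-event scheduler below is a minimal Python sketch for illustration only, not any of Prof. Lee's actual tools; every event carries a logical timestamp, ties are broken by a deterministic sequence number, and every run of the same model therefore produces exactly the same trace.

```python
# Toy deterministic discrete-event scheduler (illustrative sketch only).
# Events carry logical timestamps; simultaneous events are ordered by a
# deterministic tie-breaker, so repeated runs yield identical traces.
import heapq

class Simulator:
    def __init__(self):
        self._queue = []   # heap of (timestamp, sequence_number, action)
        self._seq = 0      # deterministic tie-breaker for equal timestamps

    def schedule(self, time, action):
        heapq.heappush(self._queue, (time, self._seq, action))
        self._seq += 1

    def run(self):
        trace = []
        while self._queue:
            time, _, action = heapq.heappop(self._queue)
            trace.append((time, action(time)))
        return trace

sim = Simulator()
sim.schedule(2.0, lambda t: "actuate")
sim.schedule(1.0, lambda t: "sense")
sim.schedule(2.0, lambda t: "log")  # simultaneous with "actuate"
# Always prints [(1.0, 'sense'), (2.0, 'actuate'), (2.0, 'log')]
print(sim.run())
```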

Gordon Blair


Biography

Gordon Blair is a Distinguished Professor of Distributed Systems in the School of Computing and Communications at Lancaster University and is also an Adjunct Professor at the University of Tromsø in Norway. He has published over 300 papers in his field and is on the PCs of many major international conferences in middleware and distributed systems. He is also chair of the steering committee of the ACM/IFIP/USENIX Middleware series of conferences. His current research interests include distributed systems architecture, complex distributed systems, middleware (including reflective and adaptive middleware), model-driven engineering techniques applied to adaptive distributed systems, and the applicability of contemporary distributed systems technologies (including cloud computing and the Internet of Things) to environmental science. He is co-author of the highly successful book Distributed Systems: Concepts and Design by Coulouris, Dollimore, Kindberg, and Blair, whose 5th edition was published in 2011. He is also Director of the HighWire Centre for Doctoral Training, a PhD program taking a cross-disciplinary perspective on innovation as it relates to the digital economy, and is co-editor-in-chief of Springer's Journal of Internet Services and Applications.

Grand Challenges, Grand Responses?

The world is facing a period of unprecedented change, and the resultant grand challenges demand a coordinated and significant response from all parties, including scientists and engineers, academics, policy makers, and citizens. Environmental change is arguably the greatest of these challenges, and this talk will focus on the role of digital technology both in understanding the complexities of the natural environment and in determining well-founded adaptation and mitigation strategies for a range of environmental problems, including climate change. The talk builds on over five years of experience working with earth and environmental scientists to provide tools that support a new kind of science, as demanded by areas such as climate change: a science that is, for example, more open, integrative, and collaborative. The research applies contemporary areas of digital innovation, including cloud computing, Internet of Things technology, and emerging areas of data science. But what does this have to do with Models? The systems we are building are highly complex, both in the underlying digital technologies and in the phenomena being observed. New techniques are urgently required to help us master this complexity, and the Models community is the keeper of one of the most powerful tools in this area: abstraction. But is the Models community focusing on the right problems, at the right scale, and does it have the ambition to take on such grand challenges? Grand challenges demand grand responses. Through this keynote, and building on the insights offered last year by Steve Easterbrook, I would like to provoke a discussion of what this might mean for Models going forward.

Jim Coplien


Biography

Jim (“Cope”) Coplien has academic degrees in engineering and Computer Science, as well as a Ph.D. in Computer Science and a Doktorat i Wetenschappen (doctorate in sciences) from Vrije Universiteit Brussel (VUB). He has been a professor at North Central College in the U.S. and a visiting professor at the University of Manchester and Flinders University, and held the 2001-2002 Vloebergh Endowed Chair at VUB. He is currently supervising graduate work at RPI in New York. He has enjoyed a broad career that included more than 20 years at AT&T Bell Laboratories as well as subsequent work in electronic design automation, academia, and consulting. He is currently a partner with Gertrud & Cope in Denmark, where his ongoing work covers a wide range of topics including organisational design, lean development process, system and software architecture, and management consulting. His industry contributions include co-founding the software pattern discipline and introducing organisational patterns, and his organisational research is one of the foundations of Scrum and of XP. For this work he received the 2013 Rakuten Technology Award in Japan. In the modeling area he is known for his broad work in architecture patterns as well as for being a co-creator, together with Trygve Reenskaug, of the Data-Context-and-Interaction (DCI) paradigm, whose models hark back to MVC (the “M” indeed stands for “model”) and to the operational models of Piaget that gave Alan Kay the initial inspiration for object-oriented programming. He is widely published, with several critically acclaimed books to his name as well as scores of articles and papers. More importantly, he still writes code, and his most recent passion is the implementation of the trygve programming language. He lives with his wife Gertrud and son Janos in Denmark. When he grows up, he wants to be an anthropologist.

The Straight Line is Ungodly

Model-based approaches are not thriving, and the blame is usually laid on the unsophistication of software engineers. Yet software engineers may tacitly know something that the formal folks seldom discuss: software systems are complex, and only simple problems yield to formal approaches, including most modeling approaches, especially the analytical and computer-supported ones.

A system is complex in proportion to its number of distinct, meaningful tops. However, these “tops” are rarely separable, so independent models based on such formalisms are relatively impotent. And such models are abstractions (i.e., they discard information) to the degree that they are formal: Gödel, Heisenberg, Rosen, and a host of others note that there is always a ghost in the machine. This matters more and more as we face increased complexity.

I will propose that modeling efforts be redirected to the more realistic, logically inconsistent, human ways of understanding ourselves and our interactions with computers. Object-orientation started with Kay’s operational models, which would become objects; we see them again in Model-View-Controller-User. Both approaches emphasize the need to mix multiple concurrent perspectives (called roles) when analyzing a system. DCI is one paradigm that provides a single computational model while accommodating “tops” for both the left and right brain, and it bridges the abstraction of modeling with concrete implementation. I will argue that design is less about reifying extrinsic, abstract models than about creating a “habitable” reality close to the implementation, guided by human mental models. Last, I will touch on the kinds of socially constructed complex models we find in patterns, which were explicitly created out of dissatisfaction with the ability of formal models to add value.
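For readers unfamiliar with DCI, the following minimal Python sketch (hypothetical names, illustrative only, not code from the trygve language) shows the shape of the paradigm: data objects stay dumb, while the behaviour of a use case lives in a Context that casts the data objects into roles, so that what the system does is readable in one place.

```python
# Minimal DCI-flavoured sketch (illustrative only): dumb data, with
# use-case behaviour gathered in a Context that binds objects to roles.

class Account:
    """Data: knows its balance, nothing about transfers."""
    def __init__(self, balance):
        self.balance = balance

class MoneyTransfer:
    """Context: casts data objects into roles for one use case."""
    def __init__(self, source, sink):
        self.source = source  # plays the source-account role
        self.sink = sink      # plays the destination-account role

    def execute(self, amount):
        # The interaction (the use case's algorithm) lives here, in one place.
        if self.source.balance < amount:
            raise ValueError("insufficient funds")
        self.source.balance -= amount
        self.sink.balance += amount

savings, checking = Account(100), Account(10)
MoneyTransfer(savings, checking).execute(30)
assert (savings.balance, checking.balance) == (70, 40)
```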
