Yuvraj Agarwal, University of California San Diego
March 12, 2013
Managing the energy consumption of computing devices is of critical importance given the limited battery lifetime of mobile platforms and the increasing carbon footprint of mains-powered PCs and servers. Traditional mechanisms to save energy, such as shutting down (or duty-cycling) individual subsystems or entire platforms, do not work well in practice since they often come at the cost of usability or functionality.
In the first part of my talk, I will show that we can improve energy efficiency through system architectures that seek to design and exploit "collaboration" among heterogeneous but functionally similar subsystems. Using collaboration, individual subsystems or even entire platforms can be shut down more aggressively to reduce their energy usage. I have built several systems that exploit this central idea to demonstrate energy savings across a broad class of devices, and in this talk I will show its application in reducing PC energy usage by 70% on average.
While computing is indeed part of the problem due to its increasing carbon footprint, in the second part of my talk I will show that computing is also part of the solution: it can be used to make other systems more energy efficient. In particular, I will focus on sensing and control solutions that we have designed and deployed within enterprise buildings to make them more energy efficient and sustainable. I will show that by using fine-grained occupancy information gathered from battery-powered wireless sensors, the energy consumption of the HVAC system within a building can be reduced dramatically, saving up to 40% in a test deployment. I will also describe our smart energy meter, which can measure the energy usage of plug loads within a building as well as provide a mechanism to control these loads based on a number of policies.
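The occupancy-driven control idea can be sketched in a few lines. This is a toy illustration, not the deployed system; the setpoint numbers and function names are hypothetical. The core move is simply to widen the thermostat's comfort band when a zone's sensors report it empty, so the HVAC conditions the zone less aggressively.

```python
# Toy sketch of occupancy-driven HVAC setback (hypothetical values, not the
# system described in the talk): an empty zone gets a wider temperature band,
# so the HVAC runs less often.

OCCUPIED_RANGE = (70.0, 74.0)   # degrees F: comfort band while occupied
STANDBY_RANGE = (65.0, 80.0)    # wider band while the zone is empty

def setpoints(occupied: bool) -> tuple:
    """Return the (heating, cooling) setpoints for a zone."""
    return OCCUPIED_RANGE if occupied else STANDBY_RANGE

def hvac_runtime(temps, occupancy):
    """Count time intervals in which the HVAC must condition the zone,
    i.e. the zone temperature falls outside the active setpoint band."""
    runtime = 0
    for temp, occ in zip(temps, occupancy):
        lo, hi = setpoints(occ)
        if temp < lo or temp > hi:
            runtime += 1
    return runtime
```

With the same temperature trace, marking two of four intervals as unoccupied halves the conditioning work in this toy model, which is the intuition behind the reported savings.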
BIO: Yuvraj Agarwal is a Research Scientist in the Department of Computer Science and Engineering at the University of California San Diego, where he also completed his PhD. His research interests are at the intersection of Systems and Networking and Embedded Systems, and he is particularly interested in research problems that benefit from using hardware insights to build more scalable and efficient systems. In recent years, his work has focused on Green Computing, Mobile Computing, Privacy and Energy Efficient Buildings. In 2012, he was awarded the "Outstanding Faculty Award for Sustainability" given by the UCSD Chancellor. He is a member of the IEEE, ACM and USENIX.
Hosted by Dirk Grunwald. vimeo.com/61761218
Geoffrey A. Hollinger, Viterbi School of Engineering, University of Southern California
Tuesday, March 5, 2013
Typically when robots are tasked with gathering information (e.g., in urban search and rescue, environmental monitoring, and aerial surveillance scenarios), human operators must oversee almost every aspect of the operation to ensure completion of the task. Strict human oversight not only makes such deployments expensive and time consuming but also makes some tasks impossible due to the requirement for heavy cognitive loads or superhuman reaction times. These limitations can be mitigated by making the robotic information gatherers autonomous, reducing deployment cost and opening up new domains (e.g., underwater monitoring and space exploration).
However, the problem of optimizing robot motion plans to maximize information is extremely difficult due to the partial observability of the environment and the exponential growth of the planning space in both the length of the mission and the number of robots. With existing solvers, it may take hours or even days to plan the actions of a small team of autonomous robots. The hardness of these problems motivates the development of scalable robot planning algorithms with guarantees that perform near-optimally in practice.
In this talk, I will show how a general framework that unifies information theoretic optimization and physical motion planning makes autonomous information gathering tractable. I will leverage techniques from submodular optimization, adaptive decision making, and active learning to provide scalable solutions in a diverse set of applications such as underwater inspection, urban target search, marine monitoring, and sensing for sustainable energy. The techniques discussed here make it possible for autonomous robots to “go where no one has gone before,” allowing for information gathering in environments previously outside the grasp of human investigation.
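To give a flavor of why submodularity makes these problems tractable (a generic sketch, not the talk's own algorithms): for a monotone submodular objective such as sensor coverage, the greedy rule of repeatedly adding the location with the largest marginal gain is guaranteed to come within a factor of (1 - 1/e) of the optimal set (Nemhauser et al., 1978), turning an exponential search into a simple loop.

```python
# Greedy selection for a monotone submodular objective (here, set coverage).
# Illustrative only: locations and coverage sets are made up.

def greedy_select(candidates, coverage, budget):
    """Pick up to `budget` sensing locations maximizing coverage greedily.

    candidates: list of location names
    coverage:   dict mapping location -> set of cells it observes
    """
    chosen, covered = [], set()
    for _ in range(budget):
        best, best_gain = None, 0
        for c in candidates:
            if c in chosen:
                continue
            gain = len(coverage[c] - covered)   # marginal value of adding c
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:        # no candidate adds new information
            break
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

# Hypothetical sensing locations and the cells each one observes.
coverage = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}, "D": {1}}
```

With a budget of two, the greedy loop picks "A" and then "C": "B" looks good in isolation, but most of its coverage is redundant once "A" is chosen, which is exactly the diminishing-returns structure submodularity captures.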
BIO: Geoffrey A. Hollinger is a Postdoctoral Research Associate in the Computer Science Department and Viterbi School of Engineering at the University of Southern California. His current research interests are in adaptive information gathering and distributed coordination for robotics and cyber-physical systems. His past research includes multi-robot search at Carnegie Mellon University, personal robotics at Intel Research Pittsburgh, active estimation at the University of Pennsylvania's GRASP Laboratory, and miniature inspection robots for the Space Shuttle at NASA's Marshall Space Flight Center. He has served as a guest editor for the Autonomous Robots journal and on program committees for the IEEE International Conference on Robotics and Automation (ICRA), the Robotics: Science and Systems Conference (RSS), and the International Joint Conference on Artificial Intelligence (IJCAI). He received his Ph.D. (2010) and M.S. (2007) in Robotics from Carnegie Mellon University and his B.S. in General Engineering along with his B.A. in Philosophy from Swarthmore College (2005).
Hosted by Dirk Grunwald. vimeo.com/61632512
Jon Froehlich, University of Maryland, College Park
Thursday, November 30th, 2012
Human behaviour is complex—so much so that it is a fundamental topic of inquiry in fields as diverse as philosophy, economics, sociology, psychology and, my own discipline, human-computer interaction (to name a few). As computing shifts off the desktop and integrates itself into various forms of human life, there is an increasing role for computing to be not just a productivity tool but to fundamentally improve lives—by making us more fit, more informed, and more aware of ourselves and the world around us.
In this talk, I will discuss how sensing and feedback systems not only allow for self-discovery but can also promote and support positive behaviour change. I will focus on designing and evaluating sensing and feedback systems to promote pro-environmental behaviour, while touching on implications for related, popular research areas such as Persuasive Technology, Quantified Self, and Personal Informatics. I will also discuss an important topic I’ve been increasingly contemplating: how to create reusable design knowledge and scaffolding to structure the process of design and evaluation for “technology-mediated behaviour change” applications.
BIO: Jon Froehlich is an Assistant Professor in the Department of Computer Science at the University of Maryland, College Park and a member of the Human-Computer Interaction Laboratory (HCIL) and the Institute for Advanced Computer Studies (UMIACS). His research focuses on building and studying interactive technology that addresses high-value social issues such as environmental sustainability, computer accessibility, and personal health and wellness. Jon earned his PhD in Computer Science from the University of Washington (UW) in 2011, where he was a Microsoft Research Graduate Fellow and the UW College of Engineering Research Innovator of the Year. His PhD dissertation, entitled Sensing and Feedback of Everyday Activities to Promote Sustainable Behaviors, won the UW Graduate School Distinguished Dissertation Award, the Madrona Prize for Research Excellence and Commercial Appeal, and the UW Environmental Innovation Challenge. Jon has over 25 peer-reviewed publications in top venues including CHI, UbiComp, IJCAI, MobiSys, and ICSE, garnering a best paper award and two best paper nominations.
For more, see: cs.umd.edu/~jonf/. Hosted by Tom Yeh. vimeo.com/59264880
Kent Stevens, University of Oregon
Thursday, November 1, 2012
ABSTRACT: It is often presumed that sauropod dinosaurs walked like elephants, despite the fact that they may have shared little in common with modern elephants other than being quadrupedal, herbivorous, and large. Likewise, sauropods are imagined to have used their long necks for high browsing, as giraffes often do, and to have held their heads high, as swans often do. Analogical reasoning based on superficial similarities with living animals underlies much of how we imagine dinosaurs, and sometimes leads to wild conjectures regarding sauropods because of their muchness. After a brief review of what we know (and don't know) about how sauropods looked, moved, and behaved, we will consider the role of computational modelling in trying to understand these extinct giants, in particular, how they walked. Fossil trackways provide dramatic evidence for sauropod locomotion, preserved when their giant footprints were impressed into a compliant surface that subsequently turned to stone. Trackway interpretation usually begins with the taxonomic classification of the trackmaker, based on the size, spacing, and morphology of the individual footprints. But attempts to identify which taxon of sauropod created a given track often rely on assumptions about how the creatures walked, which introduces circularity: understanding their pattern of locomotion requires having first identified the trackmaker. Trackway interpretation is thus more poorly constrained than often recognized, and requires additional, independent lines of evidence, such as kinematics and dynamics. A few computational studies have introduced such notions through articulated models that simulate the sauropod's pattern of locomotion. To be sufficiently concrete as to permit animation, these studies must incorporate a large number of rather specific presumptions about the trackmaker and its walking behaviour.
Instead, we have adopted an incremental strategy for incorporating such constraints, one that seeks to maximize what can be concluded from the fewest and most conservative assumptions while replicating fossil trackways and the observable behaviours associated with locomotion.
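As a concrete example of the kind of assumption-laden inference the abstract describes (not a method from the talk itself): the classic Alexander (1976) formula estimates a trackmaker's speed from stride length and hip height, v = 0.25 g^0.5 SL^1.67 h^-1.17, where hip height is itself usually guessed from footprint length, exactly the sort of circularity the speaker cautions against.

```python
import math

# Alexander's (1976) empirical speed estimate from trackway measurements.
# Illustrative back-of-envelope calculation; the input values below are
# hypothetical, and hip height is typically itself an assumption derived
# from footprint length.

G = 9.81  # gravitational acceleration, m/s^2

def alexander_speed(stride_m: float, hip_height_m: float) -> float:
    """Estimated walking speed (m/s) given stride length and hip height
    in metres, per Alexander's formula v = 0.25 * g^0.5 * SL^1.67 * h^-1.17."""
    return 0.25 * math.sqrt(G) * stride_m**1.67 * hip_height_m**-1.17
```

For a hypothetical 3 m stride and 2.5 m hip height this yields a leisurely walking pace of under 2 m/s; note how sensitive the estimate is to the assumed hip height, which illustrates why independent kinematic constraints matter.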
BIO: Professor Stevens received an undergraduate degree in engineering (1969) and a master's in computer science (1971) at UCLA, with computer graphics thesis research on the Fisheye Transform under Professor Leonard Kleinrock. Since the early 70s, computer graphics has remained a central tool throughout his varied research career. During his Ph.D. studies at the MIT Artificial Intelligence Laboratory under the supervision of Professor David Marr, he used Lisp Machines to generate novel visual stimuli for psychophysical experiments and to visualize the results of perceptual algorithms. After receiving his Ph.D. in 1979, he held a Research Scientist position at the AI Laboratory until he joined the faculty of the University of Oregon in 1982, where he continued to use computer graphics primarily to create 3D visual stimuli to explore the strategies underlying monocular and binocular depth perception. His interest in human vision led to a US Patent for a graphical means to measure and correct metamorphopsia (a visual disorder often associated with macular degeneration). He has been an industrial consultant regarding virtual reality, machine vision, and visual flight simulation. Since 1994, Stevens has also applied 3D computer graphics and digital animation techniques to create an interdisciplinary link with vertebrate palaeontology. Through the creation and animation of articulated digital models of dinosaur skeletons, Stevens has explored a variety of biomechanical questions that assist in the visualization of dinosaurs and the reconstruction of their appearance, movements, and behaviours.
Professor Stevens has consulted for and appeared in BBC, National Geographic, NHK, Discovery Channel, and other video productions; creates digital media and interactive displays on exhibit in museums internationally, including in New York, Pittsburgh, Los Angeles, and Tokyo; and presents his research at international palaeontological meetings. Co-sponsored by the University of Colorado Museum of Natural History, Sigma Xi, and the Department of Geological Sciences.
Hosted by Liz Bradley. vimeo.com/59168407
Jeffrey Sarnat, Software Engineer, Twitter
Thursday, November 15, 2012
ABSTRACT: Twitter is one of the most heavily trafficked sites on the internet, with over 140 million active users and over 400 million Tweets served per day. Historically, nearly all of Twitter's traffic has been served by a monolithic application written in Ruby on Rails, referred to internally as "the monorail." Although Twitter has seen remarkable growth in its six years of existence, scaling this monolithic architecture has proved to be both error-prone and expensive, not only in terms of computational efficiency but in terms of developer efficiency as well.
More recently, Twitter has begun the process of replacing the monorail with a Service-Oriented Architecture (SOA), where the services are JVM processes, mostly written in Scala, that communicate with one another asynchronously over a network connection. Written naively, asynchronous code is often tedious to write and difficult to reason about; at Twitter, the asynchronous code implementing Scala services is both simple and beautiful, thanks in large part to an internally developed, open-source library called Finagle.
In this talk, I will attempt to explain how Finagle's strong grounding in programming language theory has facilitated this transition to an SOA, and how Scala's advanced features allow for a library as expressive as Finagle to have been written in the first place.
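Finagle's central abstraction is a service as a function from a request to a future response, with cross-cutting concerns (timeouts, logging, retries) layered on as composable filters. The sketch below conveys that idea in Python with asyncio rather than Scala and Finagle; the names and services are invented for illustration and are not Finagle's API.

```python
import asyncio

# Illustrative sketch (in Python, not Scala/Finagle) of the "service as an
# async function, filters as wrappers" idea. All names here are hypothetical.

async def user_service(request: str) -> str:
    """A stand-in backend service: request string in, response string out."""
    await asyncio.sleep(0)           # placeholder for a network round trip
    return f"user({request})"

def timeout_filter(service, seconds: float):
    """Wrap a service so that slow calls fail fast instead of hanging."""
    async def wrapped(request):
        return await asyncio.wait_for(service(request), timeout=seconds)
    return wrapped

def logging_filter(service, log: list):
    """Wrap a service so every request is recorded before it is served."""
    async def wrapped(request):
        log.append(request)
        return await service(request)
    return wrapped

# Filters compose: the stack below logs each request, enforces a timeout,
# and only then hits the underlying service.
log = []
stack = logging_filter(timeout_filter(user_service, 1.0), log)
result = asyncio.run(stack("42"))
```

Because every filter has the same shape (service in, service out), stacks like this can be assembled declaratively, which is a rough analogue of what makes Finagle code compact.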
BIO: Jeffrey Sarnat (B.S. in Computer Science, CMU 2002; Ph.D. in Computer Science, Yale 2010) is a former programming languages researcher who currently works for Twitter on a team that builds large-scale distributed systems. He enjoys candlelit dinners and long walks on the beach. Follow him on Twitter as @Eigenvariable. vimeo.com/53693402
Computer Science Colloquia
The University of Colorado Boulder Department of Computer Science holds colloquia throughout the fall and spring semesters. These colloquia, open to the public, are typically held on Thursday afternoons, but sometimes occur at other times as well. Recordings are typically posted the following week.
If you would like to receive email notification of upcoming colloquia, please subscribe to our Colloquia Mailing List (colorado.edu/cs/colloquia/colloquia-mailing-list).