Friday, September 25, 2015

TerraSwarm-funded paper "Robust Online Monitoring of Signal Temporal Logic" Receives Best Paper Award at Runtime Verification '15, Vienna

The paper "Robust Online Monitoring of Signal Temporal Logic" received the Best Paper Award at Runtime Verification '15 in Vienna.

The paper states: "This work was supported in part by TerraSwarm, one of six centers of STARnet, a Semiconductor Research Corporation program sponsored by MARCO and DARPA, by NSF Expeditions grant CCF-1139138, and by Toyota under the CHESS center at UC Berkeley."


Jyotirmoy V. Deshmukh, Alexandre Donze, Shromona Ghosh, Xiaoqing Jin, Garvit Juniwal, Sanjit A. Seshia. Robust Online Monitoring of Signal Temporal Logic, Runtime Verification '15, Vienna, September 22, 2015.


Signal Temporal Logic (STL) is a formalism used to rigorously specify requirements of cyber-physical systems (CPS), i.e., systems mixing digital or discrete components in interaction with a continuous environment or analog components. STL is naturally equipped with a quantitative semantics which can be used for various purposes: from assessing the robustness of a specification to guiding searches over the input and parameter space with the goal of falsifying the given property over system behaviors. Algorithms have been proposed and implemented for offline computation of such quantitative semantics, but only a few methods exist for an online setting, where one would want to monitor the satisfaction of a formula during simulation. In this paper, we formalize a semantics for robust online monitoring of partial traces, i.e., traces for which there might not be enough data to decide the Boolean satisfaction (and to compute its quantitative counterpart). We propose an efficient algorithm to compute it and demonstrate its usage on two large-scale real-world case studies coming from the automotive domain and from CPS education in a Massively Open Online Course (MOOC) setting. We show that savings in computationally expensive simulations far outweigh any overheads incurred by an online approach.
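To make the notion of a robust satisfaction interval over a partial trace concrete, here is a minimal Python sketch for the simple formula "always (x > c)". This is an illustration of the idea only, not the paper's algorithm; the function name and interface are hypothetical.

```python
def robustness_interval(partial_trace, c, horizon_len):
    """Robust satisfaction interval for "always (x > c)" over a fixed horizon.

    partial_trace: samples x[0..k] observed so far.
    horizon_len:   total number of samples the formula needs.
    Returns (lo, hi) bounds on the final robustness min_i (x_i - c).
    """
    observed = [x - c for x in partial_trace]
    # The running minimum over observed samples can only decrease,
    # so it is an upper bound on the final robustness.
    hi = min(observed) if observed else float("inf")
    if len(partial_trace) >= horizon_len:
        lo = hi              # trace complete: the interval collapses to a point
    else:
        lo = float("-inf")   # future samples could drive robustness arbitrarily low
    return lo, hi
```

The interval tightens as samples arrive; in particular, once the upper bound drops below zero the property is falsified no matter what follows, so an online monitor can abort an expensive simulation early. That early termination is the source of the savings the abstract refers to.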

Thursday, September 24, 2015

TerraSwarm researcher Alex Halderman is one of PopSci's Brilliant 10

University of Michigan Professor and TerraSwarm-funded researcher J. Alex Halderman has been named one of PopSci's Brilliant 10.  The article "Brilliant 10: Alex Halderman Strengthens Democracy Using Software," discusses Halderman's 2010 efforts surrounding a mock Washington D.C. election where researchers were invited to break into the electronic voting system.  Halderman and his group were able to take complete control of the system, which they directed to play the Michigan fight song each time a vote was cast.

The article also covers TapDance, software that allows citizens of countries with restricted firewalls to bypass government censors.  TapDance is described in the following paper:

* Eric Wustrow, Colleen Swanson, J. Alex Halderman. TapDance: End-to-Middle Anticensorship without Flow Blocking, USENIX Security 2014, August 20, 2014.

Abstract: In response to increasingly sophisticated state-sponsored Internet censorship, recent research has proposed a new approach to censorship resistance: end-to-middle proxying. This concept, developed in systems such as Telex, Decoy Routing, and Cirripede, moves anticensorship technology into the core of the network, at large ISPs outside the censoring country. In this paper, we focus on two technical barriers to the deployment of end-to-middle proxy designs (the need to selectively block flows and the need to observe both directions of a connection) and we propose a new construction, TapDance, that avoids these shortcomings. To accomplish this, we employ a novel TCP-level technique that allows the anticensorship station at an ISP to function as a passive network tap, without an inline blocking component. We also apply a novel steganographic encoding to embed control messages in TLS ciphertext, allowing us to operate on HTTPS connections even with asymmetric flows. We implement and evaluate a proof-of-concept prototype of TapDance with the goal of functioning with minimal impact on normal ISP operations.

This work was supported in part by TerraSwarm, one of six centers of STARnet, a Semiconductor Research Corporation program sponsored by MARCO and DARPA.

Note that in 2014, University of Michigan Professor and TerraSwarm-funded researcher Prabal Dutta was named one of the Brilliant 10.  See "The Brilliant Ten: Prabal Dutta Powers The Internet of Things."

Madhur Behl wins the Best-in-Session award (IoT Systems session) at the 2015 SRC TECHCON

Madhur Behl (a final-year PhD student at UPenn) won the Best-in-Session award for the IoT Systems session at the 2015 SRC TECHCON on September 22, 2015. The award recognized his TerraSwarm work on a data-driven demand response recommender system (DR-Advisor), presented under the title "Sometimes, Money Does Grow on Trees."

Last year, Zhihao Jiang (also a final-year PhD student at UPenn) won the Best-in-Session award for his TerraSwarm work on Formal Verification of Closed-loop Medical Devices.

Wednesday, September 16, 2015

TerraSwarm Robot Swarm, Connected Sensors, Video Summarization and Audio Machine Learning Technology at DARPA Wait, What? Conference

TerraSwarm participated in the DARPA Wait, What? technology forum held in St. Louis, September 9-11, 2015.

The Wait, What? Demo illustrated TerraSwarm infrastructure innovations on robot swarms and connected sensors. The demonstration included cooperating robots solving more than one problem, and on-the-fly re-tasking to move from one problem to the next.

At the DARPA Wait, What? conference, the TerraSwarm team from five universities integrated five exciting and best-in-class technologies to showcase the types of applications enabled by the TerraSwarm research center. The demo focused on a robot delivery service where a team of Scarab robots (designed, built, and programmed at the University of Pennsylvania) delivered snacks to onlookers at the touch of a button. The user interface for the demo was a smartphone app, running on unmodified smartphones. After making a selection, the system dispatched a robot carrying the snack directly to the attendee, even as the attendee walked around. While the robot's main goal was the delivery application, to demonstrate how the robots could be repurposed in real-time, a context-aware machine learning application ran in the background that could interrupt the robots in response to an event. Finally, one of the robots simultaneously performed a surveillance task by carrying a video camera. The video stream was fed to a video summarizer which extracted the most interesting and novel clips from the stream in real-time. Instead of watching an hour-long video, an observer could review what the robot had seen in a minute-long summary of several interesting clips.

This composition of various technologies highlights what can be done when independent systems work together. Applications in smart manufacturing, warehouse management, in-the-field delivery, disaster relief, and monitoring and security can all benefit from these services interoperating.

The demo centered on our smartphone app, which both presented visitors with the snack options the robots could deliver and ran an indoor localization service known as ALPS (developed by a team at Carnegie Mellon University) in the background. ALPS uses fixed anchor nodes, positioned around the demo space, that periodically transmit ultrasonic chirps to localize off-the-shelf smartphones. While the chirps are at a frequency above the human hearing range, they are still audible to the microphone circuitry in phones; based on when the phone receives each chirp and the speed of sound, the ALPS system can calculate the location of the phone.
Figure 1. Students using the app to request a delivery.
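As a rough illustration of the time-of-flight idea (not the actual ALPS implementation, which is more sophisticated; the shared clock between anchors and phone assumed here is a simplification, and all names are ours), the sketch below converts chirp arrival times into ranges and solves for a 2-D position from three anchors:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def trilaterate(anchors, arrival_times, emit_time=0.0):
    """Estimate a 2-D position from ultrasonic chirp arrival times.

    anchors:       [(x, y), ...] fixed anchor positions, in metres.
    arrival_times: chirp reception times at the phone, in seconds.
    Assumes anchors and phone share a clock (a simplifying assumption).
    """
    d = [SPEED_OF_SOUND * (t - emit_time) for t in arrival_times]
    (x0, y0), d0 = anchors[0], d[0]
    # Linearise the range equations against the first anchor:
    # 2(xi-x0)x + 2(yi-y0)y = d0^2 - di^2 + xi^2 - x0^2 + yi^2 - y0^2
    a, b = [], []
    for (xi, yi), di in zip(anchors[1:], d[1:]):
        a.append((2 * (xi - x0), 2 * (yi - y0)))
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # With three anchors the 2x2 system is exactly determined.
    (a11, a12), (a21, a22) = a[0], a[1]
    det = a11 * a22 - a12 * a21
    x = (b[0] * a22 - a12 * b[1]) / det
    y = (a11 * b[1] - b[0] * a21) / det
    return x, y
```

With more than three anchors the same linearised system would be solved in a least-squares sense, which is what makes the approach robust to noisy chirp timestamps.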

The attendee's snack request and current location are fed to the application coordinator, which is an Accessors-based application running on the Ptolemy software platform (both developed by a team at UC Berkeley). Ptolemy is a visual, actor-oriented programming environment where applications are composed by connecting the inputs and outputs of the system blocks together. Accessors enable those blocks to represent external, real-world devices, such as the robots or the phones running ALPS. Ptolemy provides a central point to both describe and implement the application, as well as manage the data flowing through the system.
Figure 2. Scarab Robot with camera used for video summarization and candy. (source: Ilge Akkaya)

The robots operate semi-autonomously, and use the ALPS position updates as waypoints to complete their deliveries. With a known map of the space, the robots receive their next waypoint, calculate a path to that goal, and navigate that path independently. If any obstacles present themselves, including other robots, people walking around, or changes in the environment, the robots both detect the change and navigate around the obstruction.
Figure 3. The map showing the direction of robots. (source: Ilge Akkaya)
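The plan-and-replan loop described above can be sketched with a toy grid planner. This is not the Scarabs' actual navigation stack; it is a minimal illustration, with breadth-first search on an occupancy grid standing in for a real path planner:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on an occupancy grid (1 = obstacle).

    Returns a list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:      # walk parents back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

def navigate(grid, start, goal, sense_obstacle):
    """Follow the planned path, replanning whenever an obstacle appears."""
    pos, trace = start, [start]
    path = plan_path(grid, pos, goal)
    while pos != goal and path:
        nxt = path[1]
        if sense_obstacle(nxt):          # e.g. another robot or a person
            grid[nxt[0]][nxt[1]] = 1     # mark the map and replan
            path = plan_path(grid, pos, goal)
            continue
        pos = nxt
        trace.append(pos)
        path = path[1:]
    return trace
```

The key behaviour is in `navigate`: the robot commits only to the next cell, so a newly sensed obstruction simply updates the map and triggers a fresh plan from the current position.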

Figure 4. Scarab robots in motion, making deliveries. (source: Ilge Akkaya)

While the robots are making deliveries, a microphone is passively listening in the background and running a machine learning model trained to detect applause (set up by the UC Berkeley team). The relevant features of the audio stream are extracted locally, and then processed by the GMTK machine learning toolkit (developed at the University of Washington) to determine if there was applause in the space. When applause is detected, the robots are interrupted from their deliveries, spin in place, and then continue serving snacks. This reconfiguration is a placeholder for a more serious event, such as the robots being able to re-task themselves to respond when a disaster is detected.
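As a toy stand-in for the GMTK-based detector (the real system uses a trained graphical model; the features, frame size, and thresholds below are illustrative assumptions), applause can be caricatured as audio that is both loud and noise-like, i.e., high short-time energy together with a high zero-crossing rate:

```python
FRAME = 1024  # samples per analysis frame (an assumed value)

def frame_features(frame):
    """Short-time energy and zero-crossing rate of one audio frame."""
    energy = sum(s * s for s in frame) / len(frame)
    zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / len(frame)
    return energy, zcr

def looks_like_applause(samples, energy_thresh=0.01, zcr_thresh=0.2):
    """Flag applause when most frames are both loud and noise-like."""
    frames = [samples[i:i + FRAME]
              for i in range(0, len(samples) - FRAME + 1, FRAME)]
    hits = 0
    for f in frames:
        e, z = frame_features(f)
        if e > energy_thresh and z > zcr_thresh:
            hits += 1
    return bool(frames) and hits / len(frames) > 0.5
```

A tonal sound (speech, music, motor hum) has energy but few zero crossings, while clapping hands produce broadband noise that flips sign constantly, so even this crude pair of features separates the two cases; the learned model in the demo makes that decision far more reliably.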

The robot that is filming its environment streams its video feed to a video summarization tool (developed by a team at the University of Washington). The video summarizer analyzes the stream in real-time for interesting and novel clips that best describe the video as a whole, and aggregates these clips as a summary of the video. Essentially, the information inside the video is preserved, but the length is significantly reduced, making an originally intractable problem (monitoring the video feeds from a swarm of robots) feasible.
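A greedy novelty rule captures the flavor of this kind of online summarization (the actual University of Washington summarizer is far more sophisticated; the feature vectors and threshold here are illustrative): keep a clip only if it is sufficiently different from every clip already in the summary.

```python
def summarize_stream(clip_features, novelty_thresh=0.5):
    """Greedy online summarization over a stream of clip feature vectors.

    clip_features: one feature vector per clip, in stream order.
    Returns the indices of the clips selected for the summary.
    """
    def dist(a, b):
        # Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    summary = []
    for i, feat in enumerate(clip_features):
        # A clip is "novel" if it is far from everything already kept.
        if all(dist(feat, clip_features[j]) > novelty_thresh for j in summary):
            summary.append(i)
    return summary
```

Because the decision for each clip depends only on what has already been kept, the rule works on a live stream: redundant footage is dropped as it arrives, and the summary stays short no matter how long the robot films.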

These systems represent the cutting-edge technologies and research under way inside the TerraSwarm research center. To demonstrate how they can be connected to create otherwise impossible applications, the team at the University of Michigan focused on the systems engineering aspect: architecting the application, bringing together the components, defining interfaces between them, and ensuring the demo worked as expected. By successfully incorporating state-of-the-art research projects from multiple universities, the Wait, What? demo showcased the true potential of multi-institution research collaborations like TerraSwarm.

(Source of text: Brad Campbell)

In addition, Professor Alberto Sangiovanni-Vincentelli gave the following presentation:
Design Technology for the Trillion-Device Future, September 10, 2015.

See also: Rebecca Boyle, "Wait, What? The Most Amazing Ideas From DARPA's Tech Conference," September 15, 2015.

Saturday, September 12, 2015

IoT Lacks Tools: DARPA can play a role in building the infrastructure

On September 10, 2015, TerraSwarm researcher Professor Alberto Sangiovanni-Vincentelli spoke at DARPA's Wait, What? conference about "Design Technology for the Trillion-Device Future" (slides, video).  His talk was covered by Rick Merritt in the EE Times article "IoT Lacks Tools, Says EDA Vet."

In the talk, Professor Sangiovanni-Vincentelli suggested that IoT tools should be similar to the EDA tools used to design large chips.  However, IoT is far more heterogeneous, and in IoT the notion of time must be treated as a first-class citizen.

Professor Sangiovanni-Vincentelli was quoted as stating "The end devices have to cost a few cents each, so how can you make money from that? Another question is who will build the infrastructure…DARPA can play a role as an independent partner to coalesce people and help build the infrastructure."

The talk was also covered by Sharon Gaudin for Computerworld in an article titled "Get ready to live in a trillion-device world," which quotes Sangiovanni-Vincentelli as stating: "The brain-machine interface will have sensors placed in our brains, collecting information about what we think and transmitting it to this complex world that is surrounding us. I think I'd like to have an espresso and then here comes a nice little robot with a steaming espresso because I thought about it."