Thursday, September 28

VIP Halcyon Dialogue Robotics Showcase to Livestream

1:00PM | Halcyon, 3400 Prospect St NW, Washington, DC 20007
Event Information

In partnership with AAAS, we hosted an exclusive Facebook Live event celebrating innovations in the fields of AI and robotics, marking the release of the 2016-2017 Halcyon Dialogue series report, Shaping Robotics Policy for the 21st Century.

We listened to industry experts, congressional leaders and staff, leading technology companies, and influential policymakers and media figures as they discussed a future of living with robots and demonstrated the latest in robotics.

Weren't able to tune in? Check out the archived Facebook Live video on @HalcyonInspires, and make sure to continue the conversation with #HalcyonRobot.

Read the 2016-2017 Halcyon Dialogue series report here

Meet the Panelists 
Panel 1 - Emerging Robotics Technology: Living with Robots 
Panel 2 - Policy Opportunities in the Robotics Age 
 
Eager to learn more about the robots that visited Halcyon? Check out our robotic guests:

Amazon Web Services will demonstrate how a fully managed IoT service in the cloud (AWS IoT) and an image recognition service (Amazon Rekognition) can assist in emergency response scenarios. The demonstration will include an IoT-connected search device fitted with a camera and LED indicators; conceptually, these devices could be mounted on an unmanned vehicle (drone or land-based) or an emergency response vehicle. AWS will provide a web interface that allows operators to enter search terms (e.g., fire, person) for the search device to detect. When the search device is activated, it will transmit images and location data to an image recognition service running on the AWS cloud; the service will analyze the incoming images, extract data from them, and compare the results against the list of search terms entered into the system. When the system finds a match, the operator is notified via the web console to review the image. If the image is a positive match, the operator can put the search device into a beacon mode, where it will flash its LEDs to notify nearby emergency responders to investigate the area.
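For readers curious how the pieces of such a pipeline fit together, here is a minimal Python sketch using boto3. The MQTT topic name, search terms, and confidence threshold are illustrative assumptions, not details taken from the actual demo.

```python
# Minimal sketch of the described search pipeline (assumed values noted below).
import json

import boto3

rekognition = boto3.client("rekognition")
iot = boto3.client("iot-data")

SEARCH_TERMS = {"fire", "person"}        # terms an operator might enter
BEACON_TOPIC = "search-device/beacon"    # hypothetical MQTT topic

def process_frame(image_bytes, device_id):
    """Detect labels in one camera frame and beacon on a search-term match."""
    response = rekognition.detect_labels(
        Image={"Bytes": image_bytes},
        MaxLabels=10,
        MinConfidence=70,  # illustrative threshold
    )
    detected = {label["Name"].lower() for label in response["Labels"]}
    matches = detected & SEARCH_TERMS
    if matches:
        # In the demo flow the operator reviews the image first; this sketch
        # publishes the beacon command directly for brevity.
        iot.publish(
            topic=BEACON_TOPIC,
            qos=1,
            payload=json.dumps({"device": device_id, "matched": sorted(matches)}),
        )
    return matches
```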

Calypso for Cozmo ("Calypso" for short) is a new robot intelligence framework for the revolutionary Cozmo robot by Anki. Calypso allows anyone age 8-80 to program Cozmo using computer vision, speech recognition, and artificial intelligence algorithms. Calypso was developed by Professor David Touretzky of Carnegie Mellon University. During this live showcase, Dr. Touretzky will demonstrate how Calypso's innovative user interface supports "transparent robot intelligence". #Cozmo #CozmoMoments

EMIEW3 is a humanoid robot with an enhanced degree of autonomy, building on EMIEW and EMIEW2. A "remote brain," consisting of control functions deployed in the cloud and a robot monitoring system, forms the robotics IT platform that enables EMIEW3 to support customer and guidance services. #EMIEW3 #HitachiRobot #Hitachi

This exhibit will highlight robotic systems developed in the Laboratory for Computational Sensing and Robotics at Johns Hopkins University, including a new microsurgical robot developed to assist surgeons in minimally invasive applications in otolaryngology, neurosurgery, and similar critical fields. The system can eliminate hand tremor and enforce virtual safety barriers, allowing surgeons to perform high-stress cases with improved confidence. We will demonstrate a research version of this robot; a clinical version is being developed by Galen Robotics, Inc.
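To make those two ideas concrete, here is a toy one-dimensional Python sketch of tremor smoothing and a virtual safety barrier. The filter choice and all parameter values are invented for illustration and do not reflect the JHU or Galen Robotics implementation.

```python
# Toy 1-D illustration of tremor smoothing plus a virtual safety barrier.
# ALPHA and SAFE_LIMIT are invented values, not from the actual system.
ALPHA = 0.1        # smoothing factor: smaller = stronger tremor rejection
SAFE_LIMIT = 5.0   # hypothetical barrier position along the tool axis (mm)

def steady_hand(raw_positions, alpha=ALPHA, limit=SAFE_LIMIT):
    """Smooth a stream of commanded tool positions and clamp at the barrier."""
    estimate = raw_positions[0]
    commanded = []
    for x in raw_positions:
        # Exponential smoothing attenuates high-frequency hand tremor.
        estimate = alpha * x + (1 - alpha) * estimate
        # The "virtual wall" refuses motion past the safety boundary.
        commanded.append(min(estimate, limit))
    return commanded
```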

The Perceptive Pixel (PPI) by Microsoft 55" Touch Device is a touch-sensitive computer monitor capable of detecting and processing a virtually unlimited number of simultaneous on-screen touches. It has 1920 x 1080 resolution, adjustable brightness of up to 400 nits, a contrast ratio of up to 1000:1, and a display area of 47.6 x 26.8 inches. An advanced sensor distinguishes true touches from proximal motions of palms and arms, eliminating mistriggering and false starts. With optical bonding, the device virtually eliminates parallax issues and exhibits superior brightness and contrast. It also has built-in color temperature settings to accommodate various environments and user preferences. #Surface #SurfaceFam #MagicWall

Johns Hopkins University Applied Physics Lab (#JHU or #JHUAPL) will present intelligent systems that can make decisions under uncertainty and take action when authorized: a legion of trusted, intelligent systems able to autonomously sense, think, decide, and act while interfacing with human teammates on critical applications. JHU/APL has developed Think/Decide algorithms that allow robots to operate “in the wild”. Specifically, their team has focused on enabling a heterogeneous team of robots to:

  • perceive and describe novel objects they encounter
  • collaborate autonomously on mapping, navigating, and manipulating a challenging environment
  • interface with human teammates using natural language

Daniel Turner and Stephen Carter, the co-founders of TRAXyL and two of Halcyon Incubator's own Cohort 7 fellows, install optical fiber communications on the fly. They use their latest prototype, the FiberTRAXtor, to test their patented installation method: the finished fiber line blends directly into the road surface and enables digital communications without compromising the road's integrity.