Study Results Pave Way for Large-Scale Sensor Networks

Scientists at Disney Research and the University of Washington (UW) have shown that a network of energy-harvesting sensor nodes equipped with onboard cameras can automatically determine each camera's pose and location using optical cues.

This capability could help to enable networks of hundreds or thousands of sensors that could operate without batteries or external power and require minimal maintenance. Such networks could be part of the Internet of Things (IoT) in which objects can communicate and share information to create smart environments.

Previous work at UW has produced battery-free RFID tags called WISPs with enhanced capabilities such as onboard computation, sensing, and image capture. WISPs operate at such low power that they can scavenge the energy needed for operation from radio waves. The new work shows that WISPs with onboard cameras, or WISPCams, can use optical cues to figure out where they are located and the direction in which they are pointed. The ability of each node to determine its own location makes deployment of autonomous sensor nodes easier and the sensor data they produce more meaningful.

"Once the battery free cameras know their own positions it is possible to query the network of WISPCams for high level information such as all images looking west or sensor data from all nodes in a particular area," said Alanson P. Sample, a research scientist with Disney Research who previously was a post-doctoral researcher on the UW team that developed the WISP platform and the WISPCam.

Future iterations of this RFID-based sensing technology have the potential to enable low-cost, maintenance-free IoT applications by eliminating the need for external wiring or regular battery replacement. Networks of hundreds or thousands of these sensors could be used to monitor the condition of infrastructure such as bridges, monitor industrial equipment, or provide home security monitoring.

Sample and his collaborators - Joshua Smith, associate professor of computer science and engineering at the University of Washington, and his students Saman Naderiparizi, Yi Zhao and James Youngquist - presented their findings at Ubicomp 2015, the International Joint Conference on Pervasive and Ubiquitous Computing, in Osaka, Japan.

In this study, the researchers addressed several related networking issues: how to design sensors that can determine their own position and pose, how to reduce the amount of data that needs to be transmitted over the network, and how to better manage the small amount of power that their RF antennas capture.

The researchers used an image-processing technique called Perspective-n-Point (PnP) to determine location and pose. This involves capturing an image of a scene and then comparing it with a second image in which four LEDs in a known configuration are illuminated; the pixel positions at which those known LEDs appear give each camera enough information to solve for its own position and orientation. Using this technique, the cameras were able to estimate their positions to within a few centimeters. In their experimental setup, the researchers used four WISPCams and a separate WISP with LEDs, but Sample noted that the LEDs could be incorporated into each WISPCam.
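The pose calculation itself can be illustrated with a short sketch. The following example is not the team's firmware; it only shows how a generic PnP solver recovers a camera's position and orientation from the pixel locations of four LEDs whose 3D layout is known. The use of OpenCV's solvePnP, and all coordinate and calibration values, are assumptions chosen for illustration.

    # Sketch: recovering camera pose with Perspective-n-Point (PnP).
    # All numeric values below are hypothetical.
    import numpy as np
    import cv2

    # Known 3D positions of the four reference LEDs, in metres (hypothetical layout).
    led_points_3d = np.array([
        [0.00, 0.00, 0.0],
        [0.10, 0.00, 0.0],
        [0.10, 0.10, 0.0],
        [0.00, 0.10, 0.0],
    ], dtype=np.float64)

    # Pixel coordinates where those LEDs were detected in the camera image (hypothetical).
    led_points_2d = np.array([
        [320.0, 240.0],
        [400.0, 238.0],
        [402.0, 318.0],
        [321.0, 320.0],
    ], dtype=np.float64)

    # Intrinsic camera matrix (focal length and principal point) from a prior calibration.
    camera_matrix = np.array([
        [500.0,   0.0, 320.0],
        [  0.0, 500.0, 240.0],
        [  0.0,   0.0,   1.0],
    ], dtype=np.float64)
    dist_coeffs = np.zeros(4)  # assume negligible lens distortion

    # Solve for the camera's rotation and translation relative to the LED frame.
    ok, rvec, tvec = cv2.solvePnP(led_points_3d, led_points_2d,
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)

    # Convert the rotation vector to a matrix; the camera's position in the LED
    # frame is -R^T * t, and its viewing direction follows from the rotation.
    R, _ = cv2.Rodrigues(rvec)
    camera_position = -R.T @ tvec
    print("Estimated camera position (m):", camera_position.ravel())

In this formulation, the four LEDs play the role of the known 3D points, which is why a single reference WISP with LEDs (or LEDs built into each WISPCam) is enough for every camera in view to localize itself.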

Rather than send all of these images to a central computer - a laborious chore in networks that might include hundreds or thousands of camera sensors, and one that would place great demands on the low-power devices - the researchers showed that innovative circuitry and firmware enabled the initial processing necessary for localization to be performed onboard each sensor.
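As a rough illustration of that kind of onboard pre-processing, the sketch below subtracts an "LEDs off" frame from an "LEDs on" frame and reports only the centroids of the bright spots, so a node would need to radio back a handful of pixel coordinates rather than a full image. The specific steps, the threshold value, and the use of NumPy/SciPy are assumptions for illustration, not details reported in the study.

    # Sketch: extracting LED pixel locations from a pair of frames so that only
    # a few coordinates, not whole images, need to be transmitted.
    # The processing chain here is an assumption, not the published firmware.
    import numpy as np
    from scipy import ndimage

    def led_pixel_locations(frame_on, frame_off, threshold=40):
        """Subtract the LEDs-off frame from the LEDs-on frame, threshold the
        result, and return the centroid (x, y) of each bright blob (one per LED)."""
        diff = frame_on.astype(np.int16) - frame_off.astype(np.int16)
        mask = diff > threshold                       # pixels that brightened when LEDs lit
        labels, num = ndimage.label(mask)             # group bright pixels into blobs
        # center_of_mass returns (row, col); swap to (x, y) pixel coordinates.
        centroids = ndimage.center_of_mass(mask, labels, range(1, num + 1))
        return [(cx, cy) for (cy, cx) in centroids]

Coordinates produced this way could then feed a PnP solver like the one sketched earlier, whether that step runs on the node itself or at the RFID reader.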
