The device could help workers locate objects to fulfill e-commerce orders or identify parts to assemble products. — ScienceDaily

MIT researchers have built an augmented reality headset that gives the wearer X-ray vision.

The headset combines computer vision and wireless perception to automatically locate a specific item hidden from view, perhaps within a box or under a pile, and then instruct the user to retrieve it.

The system uses radio frequency (RF) signals, which can pass through common materials such as cardboard boxes, plastic containers, or wooden dividers, to locate hidden items labeled with RFID tags, which reflect signals sent by an RF antenna.

The headset guides the person as they walk through a room towards the location of the item, which is shown as a transparent sphere in the augmented reality (AR) interface. Once the item is in the user’s hand, the headset, called X-AR, verifies that they have selected the correct object.

When the researchers tested X-AR in a warehouse-like environment, the headset could locate hidden items within 9.8 centimeters, on average. And it confirmed that users selected the correct item with 96 percent accuracy.

X-AR could help e-commerce warehouse workers quickly find items on cluttered or boxed shelves, or by identifying an exact item for an order when there are many similar items in the same bin. It could also be used in a manufacturing facility to help technicians find the right parts to assemble a product.

“Our whole goal with this project was to build an augmented reality system that allows you to see things that are invisible — things that are in boxes or around corners — and in doing so, it can guide you toward them and truly allow you to see the physical world in ways that were not possible before,” says Fadel Adib, who is an associate professor in the Department of Electrical Engineering and Computer Science, director of the Signal Kinetics group in the Media Lab, and the senior author of a paper on X-AR.

Adib’s co-authors are research assistants Tara Boroushaki, who is the lead author of the paper; Maisie Lam; Laura Dodds; and former postdoc Aline Eid, now an assistant professor at the University of Michigan. The research will be presented at the USENIX Symposium on Networked Systems Design and Implementation.

Adding an AR headset

To create an augmented reality headset with X-ray vision, the researchers first had to outfit an existing headset with an antenna that could communicate with RFID-tagged items. Most RFID localization systems use multiple antennas spaced meters apart, but the researchers needed a single lightweight antenna that could achieve high enough bandwidth to communicate with the tags.

“One big challenge was to design an antenna that would fit on the headset without covering any of the cameras or hindering its operation. This is quite important, since we needed to use all the specs on the visor,” says Eid.

The team took a simple, lightweight loop antenna and experimented by tapering the antenna (gradually changing its width) and adding gaps, both techniques that increase bandwidth. Since antennas typically operate in open air, the researchers optimized theirs to send and receive signals while attached to the headset’s visor.

Once the team had built an effective antenna, they focused on using it to localize RFID-tagged items.

They leveraged a technique called synthetic aperture radar (SAR), which is similar to how airplanes image objects on the ground. X-AR takes measurements with its antenna from different vantage points as the user moves around the room, then adds those measurements together. In this way, it acts like an antenna array where measurements from multiple antennas are combined to localize a device.

X-AR uses visual data from the headset’s self-tracking capability to build a map of the environment and determine its location within that environment. As the user walks, it calculates the probability of the RFID tag at each location. The exact location of the tag will have the highest probability, so it uses this information to zero in on the hidden object.
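The combining-and-search loop described in the last two paragraphs can be sketched in a few lines of code. This is a minimal illustration rather than the paper's implementation: the single-frequency measurement model, the roughly 915 MHz wavelength, and the coarse candidate grid are all assumptions made for the example.

```python
import numpy as np

WAVELENGTH = 0.33  # meters, roughly the ~915 MHz UHF RFID band (assumed)

def combined_power(candidate, antenna_positions, measurements):
    """SAR-style coherent combining: undo the round-trip phase each
    measurement would carry if the tag sat at `candidate`, then sum.
    Measurements taken at the true tag location add constructively,
    so the summed power peaks there."""
    dists = np.linalg.norm(antenna_positions - candidate, axis=1)
    compensated = measurements * np.exp(1j * 4 * np.pi * dists / WAVELENGTH)
    return np.abs(compensated.sum()) ** 2

def localize(antenna_positions, measurements, grid):
    """Score every candidate location in the grid and return the one
    where the combined power (the tag's likelihood) is highest."""
    powers = [combined_power(g, antenna_positions, measurements) for g in grid]
    return grid[int(np.argmax(powers))]
```

Each step the wearer takes supplies one more position-measurement pair from the headset's self-tracking, so the power peak sharpens as the person moves around the room.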

“Although it was a challenge when we were designing the system, we found in our experiments that it works well with natural human movement. Because people move around a lot, it allows us to take measurements from many different places and accurately localize an item,” says Dodds.

Once X-AR has located the item and the user picks it up, the headset must verify that the user has grabbed the correct object. But now the user is standing still and the headset antenna is not moving, so it cannot use SAR to localize the tag.

However, as the user picks up the item, the RFID tag moves with it. X-AR can measure the motion of the RFID tag and leverage the headset’s hand tracking capabilities to localize the item in the user’s hand. It then checks that the tag is sending the correct RF signals to verify that it is the right item.
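One way to picture this verification step is to compare the tag's measured phase trajectory against the trajectory predicted from the tracked hand's distance to the antenna: if the two change together, the tag is moving with the hand. The wavelength, the tolerance threshold, and the function names below are illustrative assumptions, not details from the paper.

```python
import numpy as np

WAVELENGTH = 0.33  # meters, ~915 MHz UHF RFID band (assumed)

def phase_from_distance(d):
    # Round-trip phase a reader would observe for a tag at distance d.
    return (4 * np.pi * np.asarray(d) / WAVELENGTH) % (2 * np.pi)

def verify_pick(tag_phases, hand_distances, tolerance=0.3):
    """Return True when the tag's measured phase changes match the
    phase changes predicted from the tracked hand's distance to the
    antenna. Comparing *changes* cancels any constant phase offset;
    a small residual means the tag is moving with the hand."""
    predicted = phase_from_distance(hand_distances)
    measured = np.asarray(tag_phases)
    residual = np.angle(np.exp(1j * (np.diff(measured) - np.diff(predicted))))
    return bool(np.abs(residual).mean() < tolerance)
```

A tag left on the shelf keeps a flat phase trajectory while the hand's predicted phase sweeps as it moves, so the residual grows and the check fails.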

The researchers used the headset’s holographic visualization capabilities to display this information to the user in a simple way. When the user puts on the headset, they use menus to select an item from a database of tagged items. After the object is localized, it is surrounded by a transparent sphere so that the user can see where it is in the room. The device then projects the path to that item in the form of footprints on the floor, which are updated dynamically as the user walks.

“We abstracted away all the technical aspects so we can provide a clear and seamless user experience, which would be especially important if someone were to use this in a warehouse environment or a smart home,” says Lam.

Testing the headset

To test X-AR, the researchers created a simulated warehouse by filling shelves with cardboard boxes and plastic bins, and placing RFID-tagged items inside.

They found that X-AR could guide the user toward a targeted item with less than 10 centimeters of error, meaning that on average the item was located less than 10 centimeters from where X-AR directed the user. Baseline methods the researchers tested had a median error of 25 to 35 centimeters.

They also found that it correctly confirmed that the user had picked up the correct item 98.9 percent of the time. This means that X-AR is able to reduce picking errors by 98.9 percent. It was even 91.9 percent accurate when the item was still inside a box.

“The system does not need to visually see the item to verify that you have selected the correct one. If you have 10 different phones in similar packaging, you may not be able to tell the difference between them, but it can guide you to still pick up the right one,” says Boroushaki.

Now that they have demonstrated the success of X-AR, the researchers plan to explore how different sensing methods, such as WiFi, mmWave technology, or terahertz waves, could be used to improve its visualization and interaction capabilities. They could also improve the antenna so that its range can exceed 3 meters, and extend the system for use by multiple coordinated headsets.

“Because nothing like this exists today, we had to figure out how to build a new type of system from start to finish,” says Adib. “Really, what we’re doing is a framework. There are a lot of technical contributions, but it’s also a blueprint for how you would design an AR headset with X-ray vision in the future.”

Video: https://youtu.be/bdUN21ft7G0
