Prosthetics with Nerves?

Prosthetics as we know them are numb: they provide no sensory input to the nervous system, and using one as a functioning limb takes a lot of practice. But what if those who used prosthetics could feel them? It would still take time and effort to use one properly, but researchers predict that a prosthetic with sensory input could become just as effective as the lost limb. That would mean not losing the sense of touch: a rose petal would feel just as soft on the sensors as on skin, and a handful of sand would be just as coarse and weighty.

In the current prototype, the fingertip sensors communicate with electrodes surgically implanted in the amputee, next to the nerves that once served the missing body part. The nerves are then stimulated in accordance with the prosthetic’s sensory input.
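
The control loop itself is simple in principle. Below is a minimal sketch of how a pressure reading might be mapped to nerve stimulation; the sensor ranges, current limits, and function names are all assumptions for illustration, not the project’s actual firmware.

```python
# Hypothetical sketch of a sensor-to-nerve-stimulation mapping.
# The calibration values and safe-current limit are invented for
# illustration; the real prosthetic's firmware is not public.

def pressure_to_stimulation(pressure_kpa: float,
                            min_pressure: float = 0.5,
                            max_pressure: float = 50.0,
                            max_current_ua: float = 200.0) -> float:
    """Map a fingertip pressure reading to an electrode current.

    Light touches produce weak stimulation, firm grips produce
    stronger stimulation, clamped to a safe maximum.
    """
    if pressure_kpa < min_pressure:  # below the perception threshold
        return 0.0
    fraction = min((pressure_kpa - min_pressure) /
                   (max_pressure - min_pressure), 1.0)
    return fraction * max_current_ua

# Example: a soft rose petal (2 kPa) vs. a firm handful of sand (40 kPa)
print(pressure_to_stimulation(2.0))   # small current -> faint sensation
print(pressure_to_stimulation(40.0))  # larger current -> firm sensation
```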

Through a new contract that project leader Jacob Segil received from the U.S. Department of Veterans Affairs, the teams involved are working to improve the model that was sent out in take-home trials. The goal is to make the sensors hold up to long-term wear in the outside world, rather than perfecting them for the pristine conditions of the lab they were created in.

 

Source: https://www.colorado.edu/engineering/2018/11/13/veterans-benefit-new-fingertip-sensors-prosthetic-limbs

US Army Implementing AR

The US Army has partnered with Microsoft, hoping to use a modified version of the HoloLens headset to aid soldiers both in training and on the battlefield.

The Army plans on making the headsets lighter, improving low-light detection and the thermal detection of people at greater distances, adding night vision, enabling wireless connectivity to other wearables, and giving immediate stats on soldier performance (including the soldier’s health, such as breathing rate and concussion symptoms). They also hope to better incorporate AI and external sensors to help with rapid target acquisition, collaborative planning, route planning, soldier position tracking, identifying moving targets and explosives, and “camera-based foreign language translation.” This is starting to sound like a video game.

The headset represents only a portion of the Army’s Integrated Visual Augmentation System (IVAS), which would use augmented reality to improve soldiers’ close-combat skills. In particular, IVAS would be used in urban and subterranean environments.

The US Army has also said that it plans on having a headset for every soldier. There’s no word on when the new version of the HoloLens will be ready, but it’s clear that it will be built specifically for the Army and not available to everyday buyers.

Even so, it’s clear that the HoloLens has a ton of potential in other, more entertainment-focused areas. Here’s a demonstration of the HoloLens being used to create an AR circus performance:

 

Source: https://www.zdnet.com/google-amp/article/microsoft-hololens-to-help-us-army-create-augmented-super-soldiers-in-480m-deal/

Can VR Teach Us Empathy?

There are virtual reality experiences that can put people in the place of a homeless person, a cow headed to a slaughterhouse, and more. Creators of these projects hope that the first-person perspective will make users more empathetic; the director of the slaughterhouse project, for example, was pleased to note that people reported eating less meat afterwards.

Of course, it is impossible for a human to know what it’s like to be a cow, and that’s where these empathy projects could fall flat. They don’t seem to teach empathy (understanding) so much as sympathy (concern). That could still be effective in steering people away from hurtful behavior; however, article author Erick Ramirez cautions that gamifying bad experiences could lead people to think those experiences might not be so bad after all. Users placed in the perspective of a homeless person may find themselves almost enjoying the game-like aspects of searching and walking around a virtual world, without any of the fear that comes with the proximity of death and a true lack of money.

Virtual reality can be persuasive, but in the end it’s still a manufactured screen strapped to your head. You can always take it off, and that simple dissociation from the experience acts as a subconscious barrier to true empathy.

 

Source: https://theweek.com/articles-amp/804958/virtual-reality-make-more-empathetic

Dead Celebrity Holograms

Modern holograms are a confusing concept. They’re not quite the same as what we’ve come to expect from movies.

Six years ago, the “hologram” of Tupac Shakur that performed at Coachella wasn’t a hologram at all, but rather an instance of “Pepper’s Ghost,” a technique that uses glass and tricks of the light to make a flat, fabricated video seem 3D. That performance felt new and innovative, since “Tupac” appeared to react to the crowd’s applause and to the actions of the others on stage, but much of that was pre-determined and pre-programmed. Pepper’s Ghost itself was invented 150 years ago.

There are three big companies in the field of holographic dead celebrities (Hologram USA, Pulse Evolution, and Base Hologram), and each seems to approach the creation of holograms in a slightly different way. They also face tremendous legal trouble just gaining the rights to use deceased celebrities’ mannerisms and likenesses.

None of the three companies is keen to give away its secrets, either. Facial tracking, high-frequency animation, and CGI are likely involved, and in some cases confirmed. To recreate Amy Winehouse, pictures of her from many different angles and longer videos of her talking are being stitched together in a meticulous process. For celebrities who are available (read: alive), 3D scans can be used to make the hologram happen. Chatbots and AI can also be used to generate how the “performers” ad-lib and talk.

No matter how the celebrity is created, it seems very unlikely that the result is viewable from a variety of angles, or even able to move around much. Videos of the holographic acts are shot from a single angle: straight on. And despite the chatbots and AI, the hologram’s talking ability is limited; it can’t react to the crowd any more than the programmers can predict applause. “Thanks for coming” is pretty much it.

There are other methods of creating holograms, like Light Field Lab’s work with light fields, but of course their methods are under lock and key, considering they plan to start selling services and products in 2020. They’re also going a different route than dead celebrities, looking toward integrating their work into movie theaters and casinos.

Holograms have long been a point of interest in culture, given their depiction in movies. So far the real thing isn’t exactly stacking up, which raises the question of whether we’ll ever see a version quite like Princess Leia’s.

 

 

Sources:

https://www.vox.com/the-goods/2018/10/23/18010274/amy-winehouse-hologram-tour-controversy-technology

https://variety.com/2018/digital/features/light-field-lab-holographic-display-demo-1203026693/

“Living” Jewelry

Researchers, engineers, and designers from three universities (MIT, Stanford, Royal College of Art) came together to make Kino, a robotic piece of jewelry. Each piece attaches to a garment via magnets placed both under and on top of the fabric, while a motor powers its motion. Cloth covering the robot can camouflage Kino into the clothing or let it stand out in style.

Right now, the little robots can move along designated paths on the clothing, change the garment’s pattern, etch designs, act as a microphone, and drag a hood off someone’s head when it stops raining (Kino is equipped with environmental sensors to accomplish this). Other features are possible too, such as connecting to a smartphone to play music and make calls.
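
That hood trick boils down to a simple sense-and-act loop. The sketch below is hypothetical, with invented sensor and motor interfaces, since the team hasn’t published Kino’s control code:

```python
# Hypothetical sketch of Kino's "pull the hood down once the rain
# stops" behavior. The sensor threshold and the motor/track callbacks
# are assumptions for illustration only.

import time

RAIN_THRESHOLD = 0.1       # assumed moisture reading meaning "raining"
DRY_READINGS_NEEDED = 30   # require ~30 s of dry readings before acting

def monitor_hood(read_moisture, drive_to_hood, pull_hood_back):
    """Poll an environmental sensor and retract the hood once dry."""
    dry_count = 0
    while True:
        if read_moisture() < RAIN_THRESHOLD:
            dry_count += 1
        else:
            dry_count = 0  # still raining; reset the dry streak
        if dry_count >= DRY_READINGS_NEEDED:
            drive_to_hood()    # follow the magnetic path up to the hood
            pull_hood_back()   # tug the hood off the wearer's head
            return
        time.sleep(1.0)
```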

The makers of Kino see many possibilities in its future; they envision the bulky design eventually being swapped for something smaller and easier to integrate into clothing. A prototype was featured at the Being Material symposium hosted by MIT’s Center for Art, Science & Technology (CAST) in April 2017.

 

It’s also worth mentioning that this is not the first instance of programmable clothing. FIT graduate Birce Ozkan has created several pieces of the kind, including my favorite example, the feathered Augmented Jacket, whose feathers ruffle up when the wearer faces north.


 

Kino Source: https://www.dezeen.com/2017/08/08/kino-living-jewellery-roams-across-body-miniature-personal-assistant-mit-media-lab-movie/

Augmented Jacket Source: https://www.dezeen.com/2016/02/03/birce-ozkan-feathered-augmented-jacket-bird-navigational-skills-fashion-design-wearable-technology/

Augmented Reality with Flashlights – No Cellphones or Glasses

Traditionally, when people think of augmented reality they think of Google Glass and Snapchat filters. They don’t necessarily think of something as retro and un-isolating as a flashlight, and maybe that’s why it’s perfect.

Lumen, specifically, is an example of flashlight AR. It operates as an ordinary flashlight with a handle, so it’s easy to point the beam at any object; it’s what happens next that’s impressive. A depth-sensing camera and an object-recognition algorithm work in tandem to identify what the light is illuminating. A projector element in the flashlight then casts information onto the illuminated object, addressing specific pixels so the overlay sits correctly in 3D space. With Lumen, it would be possible to point at an engine and get step-by-step, real-time directions for fixing it: diagrams drawn onto real life.
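
In outline, each frame follows a sense, recognize, project loop. The sketch below is a guess at that pipeline; every class and function name in it is hypothetical, since Lumen’s actual software isn’t public:

```python
# A minimal sketch of the sense -> recognize -> project loop the
# article describes. All interfaces here (depth_camera, recognizer,
# projector, overlay_db) are invented for illustration.

def lumen_frame(depth_camera, recognizer, projector, overlay_db):
    """One iteration of the flashlight's AR pipeline."""
    # 1. Capture what the beam is pointed at, with per-pixel depth.
    rgb, depth = depth_camera.capture()

    # 2. Identify the illuminated object and locate it in the frame.
    label, region = recognizer.identify(rgb)

    # 3. Look up the overlay (e.g., repair instructions) for that object.
    overlay = overlay_db.get(label)
    if overlay is None:
        return  # nothing to project for an unrecognized object

    # 4. Warp the overlay onto the object's 3D surface so the projected
    #    pixels land on the right spots, then project it.
    warped = overlay.fit_to_surface(region, depth)
    projector.display(warped)
```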

The design of the user interface also makes it very easy to use, given that nearly everybody, from the elderly to children, can operate a flashlight. It’s very intuitive.

Lumen also isn’t the only AR project taking this form; similar examples are found in the Faith & Liberty Discovery Center in Philadelphia and the Lytro camera.

Soon, the complaint about people having their faces buried in their phones may not even apply. This new idea for an AR interface could mean people are more connected to both their environment and their technology than ever before.

 

 

Source: https://www.fastcompany.com/90156683/the-flashlight-is-a-surprisingly-perfect-interface-for-ar

SalonAI

(Yuya Jeremy Ong, a Penn State student, looks at his reflection in an interactive mirror)

Every year, L’Oreal hosts a competition titled Brandstorm, where international teams of college students compete to modernize and revolutionize the salon experience. This year, a Penn State team became one of the top two US teams in the competition with their submission, SalonAI. SalonAI is a smart mirror that uses facial recognition technology to suggest hairstyles and cosmetics.

The mirror recommends styles using data previously collected from model catalogues. By analyzing the distances between facial features, the mirror’s software determines the user’s face shape. It then compares that shape against models with the same one, surfacing the most popular looks for people with similar features.
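
That pipeline (landmarks to face shape to popularity lookup) is easy to sketch. The landmark names, ratios, thresholds, and catalogue format below are invented for illustration; the team hasn’t published their actual model:

```python
# A rough sketch of the face-shape logic the article describes:
# measure distances between facial landmarks, classify the face
# shape, then return the most popular looks for that shape.
# Landmarks are (x, y) pixel coordinates; all values are hypothetical.

from collections import Counter

def classify_face_shape(landmarks: dict) -> str:
    """Classify a face using two simple landmark ratios."""
    width = landmarks["cheek_right"][0] - landmarks["cheek_left"][0]
    height = landmarks["chin"][1] - landmarks["forehead"][1]
    jaw = landmarks["jaw_right"][0] - landmarks["jaw_left"][0]

    if height / width > 1.3:   # noticeably longer than wide
        return "oblong"
    if jaw / width > 0.9:      # jaw nearly as wide as the cheekbones
        return "square"
    return "oval"

def recommend_styles(landmarks: dict, catalogue: list, top_n: int = 3):
    """Return the most popular styles among models with the same shape."""
    shape = classify_face_shape(landmarks)
    styles = [m["style"] for m in catalogue if m["shape"] == shape]
    return [style for style, _ in Counter(styles).most_common(top_n)]
```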

SalonAI’s team, consisting of Vincent Trost (data sciences graduate), Vamshi Voruganti (industrial engineering graduate) and Yuya Ong (senior in data sciences), will go on to compete in the world finals on May 17 in Paris.

For other entrepreneurs at Penn State, the group advises taking advantage of the resources provided by Dear Old State. “You are sitting on a gold mine of people, ideas, community, and opportunity waiting to be unearthed,” says Ong.

 

 

Source: https://www.psu.edu/feature/2018/10/03/mirror-mirror

Virtual Art in LA

With apps such as 4th Wall, an entirely new world is unlocked, one filled with the newest frontier of art: augmented reality. Artists like Carolina Caycedo are using the medium to address gentrification, environmental damage, previously untold history, and immigration in a project titled “Defining Line.” The project comprises more than eight immersive artworks that become viewable (via the app) along the L.A. River starting November 4th.

(Carolina Caycedo’s virtual art, “Curative Mouth”)

“Defining Line,” which works exclusively along the L.A. River, is part of a bigger project titled “Coordinates,” which includes pieces of digital art located at New York City’s Statue of Liberty, Egypt’s Great Pyramid of Giza, and along the U.S./Mexican border in Tijuana.

Virtual art is quickly becoming a great medium for artists looking to provoke thought in today’s political climate. Since laws don’t apply to virtual art (a form hidden from the naked eye), there is much more opportunity for subversive display. Despite its underground quality, it has the potential to be infinitely more powerful than a plain statue. As Nancy Baker Cahill puts it, “a piece that might mean one thing in a white cube, will mean something entirely different over the Rio Grande or Liberty Island” (Furman).

It’s an opportunity for impactful, immersive art all around. Beyond that, it accomplishes even more: because the works exist only in virtual space, the projects leave no carbon footprint, unlike their physical counterparts, and don’t disrupt anyone with changes to infrastructure.

It’s also striking that most of the artists mentioned in connection with large displays of virtual art are women, like Nancy Baker Cahill, Beatriz Cortez, and Carolina Caycedo. Perhaps the semi-quiet yet subversive and bold nature of the medium allows for significant impact on societal issues, coming from the people on the streets rather than the people in charge.

 

Source: https://www.carrollcountytimes.com/la-et-cm-4th-wall-vr-art-nancy-baker-cahill-20181029-story.html

Disney Creates Costumes for Guests via AR

Disney, a force to be reckoned with on the augmented reality frontier, has been exploring people augmentation. The most popular example of people augmentation today is Snapchat’s face tracking with 3D masks. Like Snapchat, Disney is experimenting with matching RGB images to human poses and shapes; unlike Snapchat, Disney Research is attempting to track a person’s whole body.

This technology would work with mobile devices: the idea is to take a picture of someone and then let the program work its magic. The “magic,” titled AR Poser, fits a costume onto the subject’s body. It does this through 2D pose estimation followed by 3D pose projection, where the person’s pose is broken down into joint placements to form a 2D skeleton, and those positions are then transferred onto one of Disney’s pre-designed 3D characters. However, pictures and videos of AR Poser don’t (yet) show the technology applying 3D renders of any famous characters or costumes, just unnamed spacesuit-esque models.
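
The matching step, at least in outline, can be sketched simply: compare the detected 2D skeleton against the 2D projections of the stored 3D poses and pick the closest. The data shapes and scoring below are assumptions; Disney’s paper describes the full method:

```python
# A simplified sketch of nearest-pose matching: score each stored
# pose by the summed 2D distance between its projected joints and
# the detected skeleton, then take the minimum. Hypothetical data
# layout, for illustration only.

import numpy as np

def match_pose(detected_joints: np.ndarray,
               library_projections: list) -> int:
    """Return the index of the closest library pose.

    detected_joints: (J, 2) array of joint pixel coordinates.
    library_projections: twelve (J, 2) arrays, one per stored 3D pose.
    """
    errors = [np.linalg.norm(detected_joints - proj, axis=1).sum()
              for proj in library_projections]
    return int(np.argmin(errors))

# The chosen index selects which pre-designed 3D character pose the
# costume render is attached to before compositing into the photo.
```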

Since the program is new, it comes with limitations. The subspace of 3D poses matched against the guest’s picture contains only twelve poses. This is especially an issue for hands and faces, which can vary significantly in position. Disney researchers are proposing a two-step fitting system (one pass for the general body, one for the face and hands) to produce more accurate results.

Also, these twelve poses mostly apply to adults, whose proportions are more fixed than children’s. For the same reason, putting a dog in an AR costume is also not possible at this point. Sad.

 

Source: https://www.disneyresearch.com/publication/ar-poser-automatically-augmenting-mobile-pictures-with-digital-avatars-imitating-poses/