BASF & Essentium Develop Strongest 3D-Printed Prosthetic Socket

BASF and Essentium recently developed the strongest 3D-printed thermoplastic carbon fiber definitive prosthetic socket to date, designed to give patients a more comfortable fit. BASF is a German chemical company, and Essentium is a company working to bridge the gap between 3D printing and traditional machining. The purpose of these new definitive sockets is to increase patient satisfaction and comfort. Because the sockets are 3D printed, they take less money and time to produce while achieving higher accuracy, and better-suited materials can be used so that patients receive better care with less pain.

As 3D printing spreads to more fields, it can help solve more critical problems and make our lives easier. For example, this new definitive socket can give patients better treatment at much lower cost and in less time, which means more people who need it can receive the treatment and improve their living and health conditions. 3D printing technology has already been applied to education, manufacturing, medicine, and other areas. I hope to see the development of 3D printing bring even more change to the medical field in the future.

Source: https://www.basf.com/us/en/company/news-and-media/news-releases/2018/04/P-US-18-047.html

Turn Sound into Pixels

An MIT research team recently introduced PixelPlayer, a machine learning system that combines sound and visual information to recognize objects, locate them in an image, and identify the sounds they make, all without additional manual supervision. The system could be applied widely, for example to recognize specific objects or sounds in videos, and it takes sound recognition and sound editing to a new level. More specifically, PixelPlayer watches large amounts of unlabeled video and learns to locate the image regions that produce sounds and to separate the input audio into a set of components representing the sound from each pixel.
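To make the idea concrete, here is a minimal Python/NumPy sketch of the "sound of each pixel" step. It is my own illustration, not the authors' architecture: the array shapes, the softmax weighting, and the random placeholder features are all assumptions; the real system learns the visual features and audio components from video.

```python
import numpy as np

# Toy dimensions (assumptions, not the paper's actual sizes)
H, W = 14, 14      # spatial grid of image regions
K = 4              # number of separated audio components
F, T = 256, 100    # spectrogram frequency bins x time frames

rng = np.random.default_rng(0)

# Stand-in for the audio network: K component spectrograms
# split out of the input sound mixture.
components = rng.random((K, F, T))

# Stand-in for the vision network: a K-dim feature vector
# for each image region (pixel).
pixel_feats = rng.random((H, W, K))

# Softmax over K so each pixel distributes weight across components.
weights = np.exp(pixel_feats)
weights /= weights.sum(axis=-1, keepdims=True)

# "Sound of each pixel": weighted sum of the separated components,
# giving one spectrogram per image region, shape (H, W, F, T).
pixel_sounds = np.einsum('hwk,kft->hwft', weights, components)

print(pixel_sounds.shape)  # (14, 14, 256, 100)
```

Clicking a region in the demo then roughly amounts to picking one (h, w) slot and playing back its spectrogram.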

I believe that a system that can learn from unlabeled sources, then separate, analyze, and use them is big progress for sound recognition, because it already goes beyond traditional supervised methods. As the technology develops, sounds can be separated much more cleanly, which could change many things, such as video and audio editing and production, policing, surveillance systems, and so on.

Source: http://sound-of-pixels.csail.mit.edu/

Google Introduces Semantic Experiences with Talk to Books

Google has made many interesting and useful developments in search. Two days ago, it introduced a new search experience based on artificial intelligence. The program builds on advances in natural language understanding: word vectors enable algorithms to learn the relationships between words from examples of actual language usage. Google's Semantic Experiences show how this technology can turn the seemingly impossible into reality. For example, Talk to Books offers an entirely new way to explore books, starting at the sentence level rather than the author or topic level. Its models were trained on a billion conversation-like pairs of sentences, learning to identify what makes a good response. Users can simply type a question or a statement, and the tool finds sentences across over 100,000 books that respond to it, without using any keyword matching techniques or predefined rules.
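As a rough illustration of retrieval without keyword rules, the Python sketch below ranks candidate sentences by vector similarity to a query. The `encode` function is a deliberately crude, hypothetical stand-in (a hashed bag-of-words) for the learned sentence encoder Google actually trained on conversation pairs; only the ranking-by-similarity idea carries over.

```python
import numpy as np

def encode(sentence: str, dim: int = 64) -> np.ndarray:
    """Toy sentence encoder: hashed bag-of-words, L2-normalized.
    A placeholder for a learned sentence-embedding model."""
    vec = np.zeros(dim)
    for word in sentence.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# A tiny stand-in "library" of sentences (illustrative, not real book data).
book_sentences = [
    "The smell of rain on dry earth has a name: petrichor.",
    "He packed a single bag and boarded the night train north.",
    "Neurons that fire together wire together.",
]
library = np.stack([encode(s) for s in book_sentences])

query = "Why does rain smell good?"
scores = library @ encode(query)  # cosine similarity (unit-length vectors)
print(book_sentences[int(np.argmax(scores))])
```

With a real learned encoder, a query and a good response land near each other even when they share no words, which is what lets Talk to Books skip keyword matching entirely.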

I think Semantic Experiences allows a much broader population to really "talk" to books and find what they need among all those words. Talk to Books lets us interact with a book the way we interact with people, instead of searching on single words. The tool understands books at the sentence level, which means it uses machine learning to capture the relationships between words and what sentences really mean. Another benefit is that it may help people discover unexpected authors and titles, giving them a fresher and more novel reading experience.

Source: https://research.googleblog.com/2018/04/introducing-semantic-experiences-with.html

Facebook VR/AR Affects Marketing Tools

A huge amount of data and information flows through Facebook every second. Rose Leadem, a freelance writer for Entrepreneur.com, mentioned that "It is a major marketing tool for business." There are several ways Facebook Spaces could revolutionize the business world with its new 3D features. Companies could bring flashy, eye-catching elements to their messaging and advertising through Facebook's VR and AR technology, which opens up many possibilities. Sheila Eugenio, founder and CEO of Media Mentions, introduces some of these possibilities in her article. First, bridging the gap between brick-and-mortar and e-commerce: with online shopping, customers often never know what they are actually getting until the product is shipped to their address. With a full-on alternative VR universe, however, "brick and mortar stores" take on a whole new meaning. Online shopping becomes more convenient and simple when you can interact with items in 360 degrees.

Second, providing group-oriented, immersive experiences: VR can immerse users in a world free of outside interruptions, so companies can market their product or service in an immersive group setting and create more opportunities for sharing within a group. Third, increased personalization: by adding VR and AR to the mix, Facebook can help brands and users personalize their experiences and connect with each other.

I think it is still too early to tell how AR and VR will change business models as a whole; there are too many possibilities to consider. However, we can already see the changes AR has made in our lives, and business models and marketing tools will eventually change as the technology matures.

Source: https://www.entrepreneur.com/article/296471

Imaging System that Can Peer Through Fog

Driving in terrible weather is dangerous no matter who is behind the wheel; fog in particular reduces a driver's visibility dramatically. Recently, MIT researchers developed a system that, to quote the MIT News Office, "can produce images of objects shrouded by fog so thick that human vision can't penetrate it. It can also gauge the objects' distance."

The inability to handle poor driving conditions has always been a big problem in car development. The principle of this system is to count the light particles, or photons, returning to the camera and build a histogram of their arrival times, with the height of each bar indicating the photon count for that time interval; from this histogram, the system calculates the distance to physical obstacles.
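The arithmetic behind that histogram step is simple enough to sketch. The Python below simulates photon arrival times (my own made-up fog and object statistics, not the researchers' measured data), builds the histogram, and converts the strongest late peak into a distance via d = c * t / 2. The actual MIT system models the fog reflections statistically rather than using this crude peak pick.

```python
import numpy as np

C = 3.0e8  # speed of light, m/s
rng = np.random.default_rng(1)

# Simulated photon arrival times in seconds (assumed statistics):
# diffuse fog backscatter arrives early and spread out, while photons
# bouncing off the object cluster around one round-trip delay.
true_distance = 12.0                               # meters (test value)
t_obj = 2 * true_distance / C                      # round-trip time
fog = rng.gamma(shape=2.0, scale=2e-8, size=5000)  # fog photons
obj = rng.normal(loc=t_obj, scale=1e-9, size=800)  # object photons
arrivals = np.concatenate([fog, obj])

# Histogram: bar heights are photon counts per arrival-time interval.
counts, edges = np.histogram(arrivals, bins=400)

# Crude peak pick: skip the earliest bins dominated by fog backscatter,
# then take the strongest remaining bin as the object return.
cutoff = len(counts) // 4
peak = cutoff + int(np.argmax(counts[cutoff:]))
t_peak = 0.5 * (edges[peak] + edges[peak + 1])

print(f"estimated distance: {C * t_peak / 2:.1f} m")  # close to 12.0
```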

For me, driving in fog has always been painful, because it is hard to predict or even understand what is happening even when I pay full attention to the road. I believe this depth-sensing imaging system will really help with this real-world problem, not only for human-driven cars but also as a crucial step toward self-driving cars.

Self-driving cars have been in development for years, but many problems remain unsolved, and safety is one of them. So far, many self-driving systems have mimicked the way human eyes and brains recognize surroundings and judge distance, which means those systems struggle in low-visibility situations just like human drivers do. I believe this new imaging technique will push driving safety technology forward in the future.

Source: http://news.mit.edu/2018/depth-sensing-imaging-system-can-peer-through-fog-0321

Yiyun Gong