WT | News


Scientists Control Robot with Apple Vision Pro App

An app for the Apple Vision Pro mixed reality headset that allows users to operate a robot.

Researchers have developed an app for the Apple Vision Pro mixed reality headset that allows users to operate a robot using only hand and head movements. It could be used to remotely operate robots in a variety of situations, from playing practical jokes to navigating a disaster area.

Younghyo Park, the app's developer and an MIT doctoral student, posted a video of the application in use on X, formerly known as Twitter. In the video, Gabe Margolis, an MIT graduate student and co-author of the study, walks viewers through the operation of the app. You can see how he uses his hands and body to operate the four-legged robot, reports TomorrowsWorldToday.

While showcasing how the software functions, Margolis instructs the robot to use its gripper to open a closed door and let himself in. He then directs the robot to retrieve a piece of trash and dispose of it in a trash can. In another scene in the video, the robot imitates Margolis's movements, bending down when he does.

Related Apple Vision Pro Used in Spinal Surgery

Although the Apple Vision Pro has many benefits, it is not without drawbacks. Because the device depends on hand and head movements for input, it is limited in confined areas such as elevators and moving cars. Users also need to be mindful of hand position to guarantee precise tracking; for instance, tracking is restricted when a user's hands are at their waist or by their sides.

Still, the scientists think that pairing robots with the Apple Vision Pro has a lot of potential. According to Park and Margolis's paper, longer use of the Apple Vision Pro yields more data that can be used to train robots to move. More functionality for robotic applications is reportedly planned.

Stretchy, Electrically Conductive Material Hardens Upon Impact

A flexible, electrically conductive polymer that increases the robustness of wearable technologies.

Researchers at the University of California, Merced have created a flexible, electrically conductive substance that may eventually increase the robustness of wearable technology, such as smartwatches.

The novel material demonstrates adaptive durability, meaning that it gets stronger in response to strain or impact. Strangely enough, the material was inspired by the kitchen.

Yue (Jessica) Wang, the project's principal investigator, observes that when cornstarch and water are combined slowly, a mixing spoon travels easily through the mixture. You get a different result if you take the spoon out and try to shove it back in. According to Wang, "it's like stabbing a hard surface," and the spoon does not go in.
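
A toy way to see the cornstarch behavior in code: in a "power-law" fluid model, stress grows as a power of the deformation rate, and for exponents above 1 the material resists fast deformation disproportionately. This only illustrates shear thickening in general; the coefficient and exponent below are arbitrary, not values from the paper.

```python
# Illustrative shear-thickening (power-law) model: stress = K * rate**n.
# With n > 1, fast deformation meets disproportionately high resistance,
# which is the cornstarch behavior described above. K and n are arbitrary.

def power_law_stress(shear_rate, k=1.0, n=2.0):
    """Stress of a power-law fluid at a given shear rate."""
    return k * shear_rate ** n

slow = power_law_stress(0.1)   # gentle stirring: tiny resistance
fast = power_law_stress(10.0)  # shoving the spoon back in: huge resistance
ratio = fast / slow            # 100x the rate gives ~10,000x the stress here
```

A Newtonian fluid (n = 1) would show only a 100x difference for the same rates, which is why the spoon meets a "hard surface" only when moved quickly.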

Related Printed and Flexible Sensor Market Poised to Grow

Wang's team wanted to create a solid, electrically conductive material with this intriguing feature.

To achieve their objective, the team had to determine the ideal mixture of conjugated polymers (long, conductive molecules with a spaghetti-like shape). The majority of flexible polymers shatter when struck hard, quickly, or repeatedly.

The researchers initially used an aqueous solution containing four polymers: shorter polyaniline molecules, spaghetti-like poly(2-acrylamido-2-methylpropanesulfonic acid), and a conductive mixture known as poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS).

Along the way, they made adjustments to the formula to increase adaptability and conductivity. For example, adding 10% more PEDOT:PSS increased the mixture's conductivity and adaptive durability.

The group also experimented with incorporating tiny molecules into the mixture, observing the effects of each addition on the properties of the polymers. In the end, additives with positively charged nanoparticles best enhanced adaptive functionality, reports TechSpot.

"Adding the positively charged molecules to our material made it even stronger at higher stretch rates," says Di Wu, a postdoctoral researcher in Wang's lab.

Practical uses include integrated smartwatch bands and rear sensors that could readily survive the demanding environment of daily life on a person's wrist. The flexible material may also find use in the medical industry, where it might be integrated into wearable medical devices such as glucose monitors or cardiovascular sensors.

To illustrate the material's potential for use as a prosthetic, Wu and colleagues even modified an earlier version of the material to make it suitable for 3D printing and produced a facsimile of a human hand.

"There are a number of potential applications, and we're excited to see where this new, unconventional property will take us," Wang said.

Universal Exoskeletons for Everyone

Exoskeletons that may shield workers from painful injuries and aid stroke victims.

Although the term "exoskeleton" may conjure up futuristic visions from science fiction films such as Alien and Avatar, the technology is gradually approaching reality. Exoskeletons have been tested to help soldiers carry heavy packs for extended periods, minimize injuries at auto manufacturers, and even help Parkinson's patients maintain their mobility.

Researchers are working on real-life robotic assistance that could protect workers from painful injuries and help stroke patients regain their mobility. So far, though, such devices have required extensive calibration and context-specific tuning, which keeps them largely limited to research labs.

Mechanical engineers at Georgia Tech may be on the verge of changing that, allowing exoskeleton technology to be deployed in homes, workplaces, and more, reports EurekAlert!.

A team of researchers in Aaron Young’s lab has developed a universal approach to controlling robotic exoskeletons that requires no training, no calibration, and no adjustments to complicated algorithms. Instead, users can don the “exo” and go.

Their system uses a kind of artificial intelligence called deep learning to autonomously adjust how the exoskeleton provides assistance, and they’ve shown it works seamlessly to support walking, standing, and climbing stairs or ramps. They described their “unified control framework” March 20 in Science Robotics.

“The goal was not just to provide control across different activities, but to create a single unified system. You don't have to press buttons to switch between modes or have some classifier algorithm that tries to predict that you're climbing stairs or walking,” said Young, associate professor in the George W. Woodruff School of Mechanical Engineering.

Machine Learning as Translator

Most previous work in this area has focused on one activity at a time, like walking on level ground or up a set of stairs. The algorithms involved typically try to classify the environment to provide the right assistance to users.

The Georgia Tech team threw that out the window. Instead of focusing on the environment, they focused on the human — what’s happening with muscles and joints — which meant the specific activity didn’t matter.
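
The actual controller is a deep neural network; as a loose, hypothetical sketch of this human-centric idea, the toy controller below estimates the user's hip effort from a short window of onboard sensor readings and commands a proportional assistance torque, with no notion of what terrain or activity is underway. The names, gains, and the simple estimator are all invented for illustration.

```python
# Hypothetical "human-centric" exoskeleton controller sketch: estimate the
# user's hip effort from recent sensor readings (standing in for the deep
# learning estimator) and command a proportional assistance torque.

from collections import deque

class HipAssistController:
    def __init__(self, window=5, assist_gain=0.3):
        self.assist_gain = assist_gain      # fraction of estimated effort to assist
        self.history = deque(maxlen=window)  # recent sensor samples

    def estimate_hip_effort(self):
        # Stand-in for the learned estimator: a recency-weighted average
        # of thigh angular-velocity readings.
        if not self.history:
            return 0.0
        weights = range(1, len(self.history) + 1)  # favor recent samples
        total = sum(w * v for w, v in zip(weights, self.history))
        return total / sum(weights)

    def step(self, thigh_velocity):
        # One control tick: record the sample, estimate effort, return torque.
        self.history.append(thigh_velocity)
        return self.assist_gain * self.estimate_hip_effort()

ctrl = HipAssistController()
torques = [ctrl.step(v) for v in [0.0, 0.5, 1.0, 0.8, 0.4]]
```

Because the controller looks only at the human's motion, the same loop runs unchanged whether the wearer is walking, climbing stairs, or standing, which is the point the researchers make above.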

Related New Exoskeleton Helping Disabled Walk, Stand

With the controller delivering assistance through a hip exoskeleton developed by the team, they found they could reduce users’ metabolic and biomechanical effort: they expended less energy, and their joints didn’t have to work as hard compared to not wearing the device at all.

In other words, wearing the exoskeleton was a benefit to users, even with the extra weight added by the device itself.

“What’s so cool about this is that it adjusts to each person's internal dynamics without any tuning or heuristic adjustments, which is a huge difference from a lot of work in the field,” Young said. “There's no subject-specific tuning or changing parameters to make it work.”

The control system in this study is designed for partial-assist devices. These exoskeletons support movement rather than completely replacing the effort.

The team, which also included Molinaro and Inseung Kang, another former Ph.D. student now at Carnegie Mellon University, used an existing algorithm and trained it on mountains of force and motion-capture data collected in Young’s lab. Subjects of different genders and body types wore the powered hip exoskeleton and walked at varying speeds on force plates, climbed height-adjustable stairs, walked up and down ramps, and transitioned between those movements.

And like the motion-capture studios used to make movies, every movement was recorded and cataloged to understand what joints were doing for each activity.

The Science Robotics study is “application agnostic,” as Young put it, and the team's controller offers a first bridge to real-world viability for robotic exoskeleton devices.

Imagine how robotic assistance could benefit soldiers, airline baggage handlers, or any workers doing physically demanding jobs where musculoskeletal injury risk is high.

April 2024: The Revolution Against Chronic Tremors

The GyroGlove revolutionizes the lives of many people suffering from tremors.

In the ever-evolving world of medical wearable technologies, one technology stands out: the GyroGlove. Approximately 200 million people worldwide suffer from lifelong, incurable hand tremor conditions such as Parkinson's disease.

That's why GyroGear has designed what it calls the world's most advanced hand tremor stabilizer, the first mechanical-gyroscope medical device. The GyroGlove uses a powerful gyroscope to stabilize the user's hand and damp tremors, which can significantly improve the user's quality of life.
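
GyroGear has not published its control design, but the basic physics of why a spinning rotor steadies a hand can be sketched: tilting a gyroscope's spin axis at angular rate omega is resisted by a precession torque tau = omega * L, where L = I * spin_rate is the rotor's angular momentum, so faster rotors resist hand rotation more strongly. All numbers below are invented for illustration.

```python
# Rough physical intuition (not GyroGear's actual design): the torque that
# resists tilting a gyroscope's spin axis is tau = omega_tilt * L, with
# L = I * omega_spin the rotor's angular momentum. Values are made up.

import math

def gyro_resisting_torque(tremor_tilt_rate, rotor_inertia, spin_rate):
    """Torque (N*m) opposing a tilt of the spin axis at tremor_tilt_rate (rad/s)."""
    angular_momentum = rotor_inertia * spin_rate  # L = I * omega_spin
    return tremor_tilt_rate * angular_momentum    # tau = omega_tilt * L

# A tremor tilting the axis at ~2 rad/s against a small rotor
# (I = 1e-4 kg*m^2) spinning at 10,000 rpm:
spin = 10_000 * 2 * math.pi / 60  # rpm -> rad/s
tau = gyro_resisting_torque(2.0, 1e-4, spin)
```

Even this tiny hypothetical rotor opposes the tremor with roughly 0.2 N·m, which hints at why a wrist-worn gyroscope can meaningfully steady a hand.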

The GyroGlove is a major innovation in this field, as current therapies are often ineffective, drugs have significant side effects, and surgical interventions are very risky.

About GyroGear

GyroGear was founded in 2016 with the mission to alleviate tremor problems and build a future free of tremor restrictions. GyroGear has also won seven Innovation Awards at CES 2024, including awards for Accessibility and Aging Tech, Digital Health and Wearable Technologies.

Singapore Wearables Startup Raises $5 Million

A medical technology startup hopes to increase the availability of its rehabilitation services in the US.

With a new US$5 million funding round, a medical technology spinoff from Nanyang Technological University (NTU) Singapore hopes to increase the availability of its rehabilitation services in the US.

The startup, named SynPhNe, seeks to enhance the management of neurological disorders and strokes.

Related Cala Health's Neurostimulation System Treats Parkinson's

The company's main offering is a wearable device that helps those recovering from brain injuries and conditions such as strokes. It intends to target markets such as Singapore and India in addition to the US.

How It Works

The platform measures an individual's brain and muscle activity, which is shown in near real time to their therapists through in-person sessions or remotely guided tele-sessions. This data allows therapists to personalize the patient's therapy with the appropriate difficulty level, speed, and duration.

SynPhNe can also help enhance cognition and balance for some brain-muscle dysfunctions by mimicking how babies learn. Additionally, it can potentially improve hand function by up to 70% within 6-8 weeks and train children with learning difficulties to improve reading, comprehension, and writing within eight weeks, reports MobiHealthNews.
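
As a hypothetical sketch of the data-driven personalization described above, the snippet below nudges exercise difficulty up when recent session scores are consistently high and down when they are low. The thresholds, score scale, and step sizes are invented for illustration, not SynPhNe's actual logic.

```python
# Hypothetical therapy-personalization rule: adjust difficulty from recent
# brain/muscle session scores. Thresholds and step sizes are invented.

def adjust_difficulty(level, recent_scores, low=0.4, high=0.8):
    """Return a new difficulty level (1..10) given session scores (0..1)."""
    avg = sum(recent_scores) / len(recent_scores)
    if avg >= high:
        return min(10, level + 1)   # consistently good: make it harder
    if avg <= low:
        return max(1, level - 1)    # struggling: make it easier
    return level                    # within range: hold steady

level = adjust_difficulty(5, [0.9, 0.85, 0.8])     # strong sessions
level = adjust_difficulty(level, [0.3, 0.2, 0.5])  # weak sessions
```

In practice a therapist would also tune speed and duration, as the article notes; the point here is only that continuous measurement enables this kind of closed-loop adjustment.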

Event Horizon Technologies, an affiliate of the Nadathur Group, is one of the company's investors in the Series A round. Nadathur Raghavan, a co-founder of the consulting and digital services company Infosys, runs the organization as his family office.

The National Research Foundation, the National Medical Research Council, and the Singapore-MIT Alliance for Research and Technology are just a few of the Singaporean research and enterprise initiatives that have supported SynPhNe since it was first established under the NTUitive program.

Shape-Shifting Dress Morphs with Wearer’s Style and Body

Scientists have created new clothing that adapts to changes with the aid of smart technology.

The fashion business is continually changing in terms of trends and designs. Our bodies, too, change continuously. An innovative garment created at MIT adapts to those changes with the aid of smart technology. The outfit changes to reflect your style, figure, and the ever-evolving fashion trends.

Fresh out of MIT’s Architecture Department, Sasha McKinlay sees it as a sustainable fashion revolution. “We’re trying to give people a way to express themselves through clothes that last,” she says. “Not just a season, but years.”

Related Smart Hat Senses Traffic Light Change

“It’s a human need. But there’s also the human need to express oneself. I like the idea of customizing clothes in a sustainable way. This dress promises to be more sustainable than traditional fashion to both the consumer and the producer.”

For the creation of the dress, McKinlay collaborated with Ministry of Supply, a high-tech fashion company. The dress incorporates several technologies to ensure that it fits a person perfectly. Its primary material is heat-activated yarn, which allows the garment to be worn in many ways: it can go from pintucks to pleats or a tightened waist, for instance. This allows the garment to be customized to fit an individual's style, reports TomorrowsWorldToday.

Skylar Tibbits, founder of MIT's Self-Assembly Lab, believes the fashion business has a few problems. Mass-produced clothing is not distinctive, but bodies are. “Everyone’s body is different,” Tibbits said. “Even if you wear the same size as another person, you’re not the same.” “Fast fashion” is also a growing issue, with clothes made cheaply, worn briefly, and then thrown out.

With the assistance of robotics specialist Danny Griffin, they discovered a way to use heat to make sure the dress fits people of different shapes and sizes. The "smart" yarn in the dress is activated by heat, allowing it to change styles; to do this, they employ a robot-guided heat gun. “It’s like tailoring performed by a machine,” Griffin explained. “Except you can redo it whenever you want a fresh look.”
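
The region-by-region reprogramming idea can be sketched as a toy model: each yarn segment switches to a new target shape only when it is both targeted by the heat gun and above an activation temperature. Segment names, shapes, and the threshold below are invented for illustration, not the team's actual parameters.

```python
# Toy model of heat-activated restyling: heated segments above the
# activation temperature take on a new programmed shape; everything
# else keeps its current shape. All values are invented.

ACTIVATION_C = 60.0

def apply_heat(segments, heated_ids, target_shape, temp_c=70.0):
    """Return new segment shapes after heating the selected segments."""
    return {
        seg_id: (target_shape
                 if seg_id in heated_ids and temp_c >= ACTIVATION_C
                 else shape)
        for seg_id, shape in segments.items()
    }

dress = {"waist": "loose", "hem": "straight", "bodice": "flat"}
dress = apply_heat(dress, {"waist"}, "tightened")  # first restyling pass
dress = apply_heat(dress, {"hem"}, "pleated")      # second pass, new region
```

Because each pass is local and repeatable, the same garment can be "re-tailored" indefinitely, which is the reuse argument the article makes.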

The dress may be made to change by applying heat. It's a creative, eco-friendly piece of apparel that maintains its "fashion" without sacrificing sustainability. Tibbits said, “Right now when people purchase a piece of clothing it has only one ‘look.’ But, how exciting would it be to purchase one garment and reinvent it to change and evolve as you change or as the seasons or styles change? I’m hoping that’s the takeaway that people will have.”

Apple Vision Pro Used in Spinal Surgery

Apple's Vision Pro assisted a surgical team at a UK hospital in performing a medical procedure.

Apple's Vision Pro headset assisted a surgical team at a UK hospital in performing a medical procedure. A team at Cromwell Hospital in London utilized the mixed-reality headgear to help with two microsurgical spine procedures.

The Daily Mail reported that while surgeons Fady Sedra and Syed Aftab led the team, it was a scrub nurse who wore the headset to support them.

The hospital was introduced to the device by eXeX, which provides tech platforms to hospitals, reports Business Insider.

Aftab, a consultant orthopedic spinal surgeon, stated: "Using the Apple Vision Pro with eXeX has greatly improved the way we treat our patients. The Complex Spine team is now more productive because of the software, which runs smoothly."

According to an Apple press release, a number of healthcare applications compatible with the headset have been available since the Vision Pro's US launch last month.

Among these, Stryker created an app for its Mako SmartRobotics system, designed specifically for doctors performing knee and hip surgeries.

Meanwhile, Cedars-Sinai offers patients an app providing mental health support through deep breathing exercises and meditation, while Fundamental Surgery delivers surgical training via virtual reality.

Related New Partnership Aims to Detect Brain Disorder Using Virtual Reality

In the news release, Susan Prescott, Apple's vice president of worldwide developer relations, stated: "We're thrilled to see the incredible apps that developers across the healthcare community are bringing to Apple Vision Pro."

Meta's virtual reality hardware has also been used for surgery preparation, according to CNBC. In a simulated procedure in 2022, physicians at Kettering Health Dayton in Ohio used the Quest 2 headset to simulate a shoulder replacement in three dimensions.

Hospital operator Universal Health Services says that virtual reality (VR) tools can assist surgeons in examining a patient's anatomy before an operation, "much like a pilot uses a flight simulator."

Garmin’s Approach S40 is a Stylish Smartwatch that’s Especially Designed for Golfers

Garmin unveiled another stylish smartwatch that’s designed especially for golfers.

Just a week after announcing five new luxury smartwatches in its MARQ series, Garmin unveiled another stylish smartwatch designed especially for golfers. The Approach S40 is a stylish, lightweight GPS smartwatch featuring a vibrant 1.2-inch color touchscreen that is sunlight-readable for everyday use and a metal bezel that bolsters its elegant design.

Related Garmin Announces MARQ Series: Five New Luxury Smartwatches with Various Features

Adjustable quick-release bands come in different colors. The Approach S40 integrates AutoShot game tracking to measure and auto-record a golfer’s detected shot distance for a more focused playing experience. The highly responsive GPS receiver conveniently locks in on a golfer’s location and displays precise yardages to the front, middle, and back of the green, as well as hazards, doglegs, and more. It also gives golfers access to over 41,000 preloaded courses from around the world.

“The hallmark of the new Approach S40 is its superb ability to offer high-quality functionality on the course as well as off the course,” said Dan Bartel, Garmin worldwide vice president of consumer sales. “The Approach S40 looks great on your wrist and gives golfers exactly what they need with a mix of high-sensitivity GPS golf accuracy on the course, and top-notch smartwatch capabilities that track everyday activities.”

Golfers can also explore an in-depth golf feature set directly from their wrist. Veteran and novice players alike can use the Green View feature to help enhance their accuracy by manually dragging and dropping the day’s pin location on the display to gain precise yardage, the company said in a press release.

The watch can also display digital scorecards with Stableford scoring. Golfers can automatically upload these scorecards to the free Garmin Golf™ app. Once the app is downloaded, they can take advantage of automatic course updates and even review stats in real time during play or after a round with a compatible smartphone.

The rechargeable battery in the S40 lasts 15 hours on the course, and up to 10 days in smartwatch mode. A bundled version of the watch adds a three-pack of Approach CT10 club sensors that can be paired for additional automatic game tracking capabilities.

Related GolfLogix Apple Watch App Provides Golfers with Yardages, Green Images, Hole Selection and More

The Approach S40 also provides activity tracking features such as steps and sleep monitoring, plus built-in multisport profiles for fitness initiatives.

The Approach S40 is available now with a suggested retail price of $299.99, and the Approach S40 Bundle is $349.99.

Smartwatch Lets You See Blood Flow Inside Your Body

A photoacoustic imaging watch has been developed by researchers.

Researchers have developed a photoacoustic imaging watch for high-resolution imaging of blood vessels in the skin. The wearable device could offer a non-invasive way to monitor hemodynamic indicators such as heart rate, blood pressure and oxygen saturation that can indicate how well a person's heart is working.

"Although photoacoustic imaging is extremely sensitive to variations in hemodynamics, difficulties in miniaturizing and optimizing the imaging interface have limited the development of wearable photoacoustic devices," said research team leader Lei Xi from the Southern University of Science and Technology in China. "To the best of our knowledge, this is the first photoacoustic wearable device that is suitable for healthcare applications."

In the Optica Publishing Group journal Optics Letters, the researchers describe their new system, which consists of a watch with an imaging interface, a handheld computer and a backpack housing the laser and power supply. Tests with volunteers moving freely showed that the device can be used to observe blood flow variations during different activities such as walking.

"Miniaturized wearable imaging systems like the one we developed could potentially be used by community health centers for preliminary disease diagnosis or for long-term monitoring of parameters related to blood circulation within a hospital setting, offering valuable insights to inform treatments for various diseases," said Xi. "With further development this type of system could also be helpful for the early detection of skin conditions such as psoriasis and melanoma or for analyzing burns."

Creating a wearable imager

Photoacoustic imaging is a label-free technique that forms images by measuring light-induced sound waves created by light absorption in structures. Analyzing the photoacoustic signal intensity and distribution offers insights into the functional and structural characteristics of microvessels, which can be altered by various diseases. Although photoacoustic imaging is still primarily a research tool, it is beginning to find clinical application in areas such as cancer, vascular and dermatological imaging.
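
The image-forming step can be illustrated with a common simplification: each laser pulse yields a time-resolved acoustic "A-line" at one scan position, and projecting the peak signal amplitude at every position gives a 2D map in which strong optical absorbers such as blood vessels appear bright. The signals below are synthetic, and this is a generic sketch of amplitude projection, not the authors' actual pipeline.

```python
# Minimal photoacoustic image-formation sketch: take the peak absolute
# amplitude of each time-resolved A-line to form a 2D intensity map.
# Strong absorbers (e.g. blood vessels) produce strong echoes and so
# appear bright. Signals here are synthetic, for illustration only.

def max_amplitude_projection(a_lines):
    """a_lines: 2D grid [y][x] of time-series signals -> 2D image."""
    return [[max(abs(s) for s in signal) for signal in row]
            for row in a_lines]

# Synthetic 2x3 scan: a "vessel" at position (0, 1) gives a strong echo.
scan = [
    [[0.0, 0.1, 0.0], [0.1, 0.9, -0.8], [0.0, 0.05, 0.0]],
    [[0.0, 0.1, 0.0], [0.0, 0.1, 0.0],  [0.05, 0.0, 0.1]],
]
image = max_amplitude_projection(scan)
```

Repeating this projection over time is, in essence, how variations in blood flow show up as changing intensity in the watch's live display.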

To turn what is typically a bulky instrument into something that could be worn while moving around, the researchers developed a compact optical resolution photoacoustic microscopy system based on a compact pulsed laser, a fiber-based light path, and an integrated electronic system housed in a backpack weighing 7 kilograms. They also designed a handheld device to store the images and created a miniaturized watch-type imaging interface with an adjustable focal plane and a screen for displaying the images in real time.

Read more Blood Pressure measuring E-Tattoo

The researchers designed the system so that it could be used for imaging while the wearer is freely moving around. It also features an adaptable laser focus, which is necessary for imaging multilayered structures like skin. The photoacoustic imaging system has a lateral resolution of 8.7 µm, which is sufficient to resolve most microvessels in the skin, and a maximum field of view of around 3 mm in diameter, which is adequate for capturing microvascular details.

Tracking blood on the move

The researchers tested the device with volunteers to evaluate the focus shifting function of the watch and the system's capacity to detect blood flow changes over an extended time under different conditions, such as while the wearer was walking and when a cuff was used to temporarily block blood flow to the arm. These tests showed that the system is usable and compact and stable enough to allow free movement.

The researchers are now working to create a system that employs an even smaller laser source with a higher repetition rate. This will make the system more compact and lighter while also enhancing safety and temporal resolution. "Given the rapid development of modern laser diode technology and electronic information technology, it should be entirely feasible to develop a more advanced and intelligent photoacoustic watch that doesn't require a backpack," said Xi.

The researchers are also working to ensure the stability of the fiber-coupled optical path over extended periods and under more intense conditions such as running and jumping. They also want to incorporate multispectral illumination, which would allow the acquisition of additional physiological parameters including oxygen saturation and blood flow velocity and the quantitative assessment of parameters such as vessel number and volume. These capabilities could help support early diagnosis of conditions such as cancer and cardiovascular diseases.

AI Robot Dog Tackles Complex Obstacle Courses

An AI robot has now been trained to navigate challenging, never-before-seen obstacle courses.

Acrobatic robot displays may seem like a wonderful marketing gimmick, but they are usually well planned and expertly rehearsed. Now, researchers have trained a four-legged AI robot to navigate challenging, never-before-seen obstacle courses in practical settings.

Building nimble robots is difficult because of the real world's inherent complexity, the limited information robots can gather about it, and the speed at which decisions must be made to perform dynamic motions.

Organizations such as Boston Dynamics have frequently published videos of their robots performing a variety of tasks, including parkour and dancing. While these achievements are astounding, they usually require people to laboriously program each step or to practice repeatedly in extremely controlled situations, reports Edd Gent in Singularity Hub.

This approach severely limits the robots' ability to apply their skills in the real world. However, using machine learning, researchers from ETH Zurich in Switzerland have taught their robot dog, ANYmal, a set of fundamental locomotion skills. With these skills learned, the dog can now navigate a wide range of difficult obstacle courses, both indoors and outdoors, at up to 4.5 miles per hour.

To develop a system that was both adaptable and capable, the researchers segmented the problem into three parts and allocated a neural network to each. First, they developed a perception module that builds a picture of the terrain and any obstructions in it using data from lidar and cameras.

Related Robot Completes Surgery in Space

They integrated this with a locomotion module that has learned a wide range of abilities, such as jumping, crouching, and climbing up and down, to help the robot get over various barriers. Finally, these modules were combined with a navigation module that could determine which abilities to use to overcome each obstacle and plot a path through a sequence of them.

Instead of using human examples throughout the training process, the researchers relied exclusively on reinforcement learning, or trial and error. This allowed them to train the AI model on a huge number of randomized scenarios without having to manually label each one.
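
The ETH system uses neural networks trained in simulation; as a deliberately simplified analogy of "skills selected by trial and error on randomized scenarios," the toy learner below discovers, with no labeled examples, which invented skill clears which invented obstacle. Obstacle types, skills, and rewards are all made up for illustration.

```python
# Toy analogy of the training idea: a "navigation" policy picks one of
# several locomotion skills per obstacle and improves purely by trial and
# error (reinforcement learning), never by labeled demonstrations.

import random

SKILLS = ["walk", "jump", "climb", "crouch"]
OBSTACLES = ["gap", "box", "bar"]
# Which skill actually clears which obstacle (hidden from the learner).
SOLUTION = {"gap": "jump", "box": "climb", "bar": "crouch"}

def train(episodes=5000, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    # q[obstacle][skill]: running estimate of that skill's success rate.
    q = {o: {s: 0.0 for s in SKILLS} for o in OBSTACLES}
    counts = {o: {s: 0 for s in SKILLS} for o in OBSTACLES}
    for _ in range(episodes):
        obstacle = rng.choice(OBSTACLES)       # randomized scenario
        if rng.random() < epsilon:             # explore a random skill
            skill = rng.choice(SKILLS)
        else:                                  # exploit the best estimate
            skill = max(SKILLS, key=lambda s: q[obstacle][s])
        reward = 1.0 if SOLUTION[obstacle] == skill else 0.0
        counts[obstacle][skill] += 1
        q[obstacle][skill] += (reward - q[obstacle][skill]) / counts[obstacle][skill]
    return q

q = train()
policy = {o: max(SKILLS, key=lambda s: q[o][s]) for o in OBSTACLES}
```

The real system learns continuous motor commands with deep networks rather than a lookup table, but the principle is the same: reward for clearing randomized obstacles, no hand-labeled steps.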

Another impressive aspect is that the robot runs on onboard chips rather than relying on external computers. The researchers also demonstrated that ANYmal could recover from falls or slides to finish the obstacle course, in addition to handling a wide range of conditions.

Altogether, the research shows that robots are getting better at functioning in challenging real-world settings, which means they might soon be considerably more visible everywhere.

Printed and Flexible Sensor Market Poised to Grow

Printed and flexible sensor technology sectors are expected to experience growth.

In the modern world, sensors, some of which are printed and flexible, are essential. They serve as the link between the real and virtual worlds, measuring an enormous variety of physical properties. Printed sensors, as the name suggests, are printed onto rigid or flexible substrates using solution-processable functional inks. As a result, printed sensors can be made at drastically lower costs and in huge quantities using proven manufacturing processes.

Printed and flexible sensors can measure a plethora of physical interactions, including touch, force, pressure, displacement, temperature, and electrical signals, as well as detecting gases. One of the earliest, and now most ubiquitous, printed sensor technologies is the printed force sensor, found in cars for seat belt occupancy detection. Printed sensors find applications in commercial sectors such as automotive, healthcare, wearables, consumer electronics, industry, and logistics.
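
The seat-occupancy example hints at how printed force sensors are commonly read out: a force-sensing resistor (FSR) drops in resistance under load, and a simple voltage divider converts that resistance into a voltage a microcontroller can sample. The resistance-vs-force curve and component values below are invented for demonstration; real FSR datasheets specify the actual curve.

```python
# Illustrative FSR readout: resistance falls with applied force, and a
# voltage divider turns that into a measurable voltage. The R-vs-force
# model and component values are invented for demonstration.

def fsr_resistance(force_n, r0=100_000.0, k=50.0):
    """Hypothetical FSR model: resistance (ohms) falls as force (N) rises."""
    return r0 / (1.0 + k * force_n)

def divider_voltage(r_sensor, r_fixed=10_000.0, vcc=3.3):
    """Voltage across the fixed resistor in a divider with the FSR on top."""
    return vcc * r_fixed / (r_fixed + r_sensor)

unloaded = divider_voltage(fsr_resistance(0.0))  # seat empty: low voltage
occupied = divider_voltage(fsr_resistance(5.0))  # occupant: high voltage
```

A threshold on the divider voltage is then enough to trigger the seat belt reminder, which is why such a cheap printed element suffices for occupancy detection.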

IDTechEx is uniquely positioned on this subject. Its analyst team draws on many years of following new technology markets, with a focus on printed electronics, a crucial component of printed and flexible sensors. IDTechEx has supported this work by hosting the leading industry conferences and exhibitions on printed, flexible, and wearable electronics, and the network built in these subject areas underpins the analysis in the report.

This report critically evaluates eight printed sensor technologies: printed piezoresistive sensors and force sensors (FSRs), piezoelectric sensors, photodetectors, temperature sensors, strain sensors, gas sensors, capacitive touch sensors, and wearable electrodes. It also discusses areas of innovation in the manufacturing of printed sensors, including emerging material options and the technology underlying the manufacturing process. The report characterizes each application of printed sensors, discussing the relevant technology, product types, competitive landscape, industry players, and pricing, as well as key meta-trends and drivers for each sector. It also contains detailed 10-year market forecasts for each of the key printed sensor technology areas.

The research behind the report has been compiled over many years by IDTechEx analysts. It builds on existing expertise in areas such as sensors, wearable technology, flexible electronics, stretchable and conformal electronics, smart packaging, conductive inks, nanotechnology, future mobility and electronic textiles. The methodology involved a mixture of primary and secondary research, with a key focus on speaking to executives, engineers, and scientists from companies developing printed and flexible sensors. As such, the report analyses all known major companies and projects, including over 35 profiles.

Related The Potential of 3D Printed Electronic Skin

This report provides critical market intelligence about the eight printed sensor technology areas involved. This includes:

A review of the context and technology behind printed and flexible sensors:

• History and context for each technology area
• General overview of important technologies and materials
• Overall look at printed and flexible sensor trends and themes within each technology area
• Benchmarking and analysis of different players throughout

Fraunhofer FEP’s Microdisplays and Sensors Business Unit

Fraunhofer FEP’s microdisplays and sensors business unit has been integrated into Fraunhofer IPMS.

The Microdisplays and Sensors business unit at the Fraunhofer Institute for Organic Electronics, Electron Beam and Plasma Technology FEP will be integrated into the Fraunhofer Institute for Photonic Microsystems IPMS with retroactive effect from January 1, 2024. Both institutes are closely connected, particularly within this business unit, and share infrastructure at the Dresden site. By pooling expertise and streamlining structures, we anticipate the creation of synergies that will strengthen the research field, expedite development and thus benefit customers and partners.

There is rapid development in the market for the microdisplays used in augmented reality (AR), virtual reality (VR) and mixed reality (MR) applications (often collectively referred to as XR) and this will be an important growth market of the future. The integration of OLED and µLED frontplane technologies in CMOS backplanes is not only the key to success in this sector but also the technological basis for near-to-eye visualization of information. Fraunhofer IPMS and Fraunhofer FEP have now decided, in consultation with the Fraunhofer-Gesellschaft, to integrate the Fraunhofer FEP Microdisplays and Sensors business unit into Fraunhofer IPMS. Their goal is to leverage synergies in the area of infrastructure, pool expertise and establish a unique profile for both institutes. Fraunhofer IPMS has long been one of the leading institutes in microelectronics and microsystems engineering.

Over the past ten years, the Microdisplays and Sensors business unit has developed into a globally successful and established player under the umbrella of Fraunhofer FEP. At the current stage of development, the transfer to an institute specialized in microelectronics offers suitable conditions to further develop the business unit. This will also allow Fraunhofer FEP, as a process-oriented institute, to focus even more on its expertise in electron beam and plasma technology. This transfer provides technological solutions for the growing demand in the fields of energy, sustainability, life sciences and environmental technologies for industry and society — now and in the future.

“By integrating the Fraunhofer FEP Microdisplays and Sensors business unit into Fraunhofer IPMS, we are pooling our expertise and ensuring the best possible use of our infrastructure. This will also increase our chances to win projects with the Microelectronics group. The transfer is a good example of the strategic development of a research field and the leveraging of synergies across institutes,” says Prof. Holger Hanselka, President of the Fraunhofer-Gesellschaft. “This will strengthen the research field and pave the way for new technological capabilities in the field of microdisplays by leveraging the synergies of the existing microelectronics infrastructure. The close relationship of the institutes at the Dresden site will ensure seamless and continuous advancement in this field. My special thanks go to all those involved for their contributions.”

Related Fraunhofer ISE Develops World's Most Efficient Solar Cell

Harald Schenk, Director of Fraunhofer IPMS, added: “In the future, Fraunhofer IPMS will increase its activities in this area and focus more on the heterogeneous integration of various chiplet technologies in conjunction with CMOS microelectronics. This future-oriented technology includes the integration of organic semiconductors (e.g., OLEDs) and novel emitter technologies (e.g., µLEDs), which will open up new avenues in micro/optoelectronics and microsystems engineering.”

Elizabeth von Hauff, Director of Fraunhofer FEP, said: “The Microdisplays and Sensors business unit has played a significant role in Fraunhofer FEP’s dynamic development. We are proud of this and would like to thank our employees and managers for their dedication. Transferring to Fraunhofer IPMS will open up additional development potential for the business unit and enable Fraunhofer FEP to focus on strategic topics in the field of electron beam and plasma technologies.”

About Fraunhofer IPMS

The Fraunhofer Institute for Photonic Microsystems IPMS is one of the leading research and development service providers in the application fields of intelligent industrial solutions and manufacturing, medical technology and health, and mobility. Research focuses on miniaturized sensors and actuators, integrated circuits, wireless and wired data communication and customized MEMS systems. In state-of-the-art clean rooms, the institute researches and develops solutions on 200 mm and 300 mm wafers. Services range from consulting and process development to pilot production.

About Fraunhofer FEP

The Fraunhofer Institute for Organic Electronics, Electron Beam and Plasma Technology FEP focuses on the development of innovative solutions, technologies and processes for surface finishing. This work is based on the institute’s expertise in the fields of electron beam technology, plasma-assisted large-area and precision coating, roll-to-roll technologies and the development of key technological components.

Fraunhofer FEP thus offers a broad spectrum of research, development and pilot production options, especially for the treatment, sterilization, structuring and refinement of surfaces but also liquids and gases.

Biocompatible Sticker Detects Post Surgical Leaks

A sticker that allows medical professionals to check on the condition of a patient's deep tissues.

Patients recuperating from gastrointestinal surgery may soon find their lives saved by a small, simple sticker.

Researchers led by Northwestern University and Washington University School of Medicine in St. Louis have developed a new, first-of-its-kind sticker that enables clinicians to monitor the health of patients' organs and deep tissues with a simple ultrasound device.

BioSUM, an acronym for "Bioresorbable, Shape-adaptive, Ultrasound-readable Materials," was created by Northwestern University's Prof. John A. Rogers and postdoctoral fellow Jiaqi Liu. Dr. Chet Hammill initiated the study and led the evaluation of the prototype.

The BioSUM takes the form of a thin, flexible, biocompatible sticker made up of several spaced-apart metal discs embedded in a pH-responsive hydrogel base. When attached to an organ, the soft, tiny sticker changes shape in response to the body's changing pH levels, which can serve as an early warning sign for post-surgery complications such as anastomotic leaks. Clinicians can then view these shape changes in real time through ultrasound imaging. As long as no leakage occurs, the BioSUM stays in its default state, reports NewAtlas.

Currently, no existing methods can reliably and non-invasively detect anastomotic leaks - a life-threatening condition that occurs when gastrointestinal fluids escape the digestive system. By revealing the leakage of these fluids with high sensitivity and high specificity, the non-invasive sticker can enable earlier interventions than previously possible. Then, when the patient has fully recovered, the biocompatible, bioresorbable sticker simply dissolves away, bypassing the need for surgical extraction.

"These leaks can arise from subtle perforations in the tissue, often as imperceptible gaps between two sides of a surgical incision," said Northwestern's John A. Rogers, who led device development with postdoctoral fellow Jiaqi Liu. "These types of defects cannot be seen directly with ultrasound imaging tools. They also escape detection by even the most sophisticated CT and MRI scans. We developed an engineering approach and a set of advanced materials to address this unmet need in patient monitoring. The technology has the potential to eliminate risks, reduce costs and expand accessibility to rapid, non-invasive assessments for improved patient outcomes."

Related Johnson & Johnson Partners With Microsoft For Digital Surgery Solutions

"Right now, there is no good way whatsoever to detect these kinds of leaks," said gastrointestinal surgeon Dr. Chet Hammill, who led the clinical evaluation and animal model studies at Washington University with collaborator Dr. Matthew MacEwan, an assistant professor of neurosurgery. "The majority of operations in the abdomen, when you have to remove something and sew it back together, carry a risk of leaking. We can't fully prevent those complications, but maybe we can catch them earlier to minimize harm. Even if we could detect a leak 24 or 48 hours earlier, we could catch complications before the patient becomes really sick. This new technology has potential to completely change the way we monitor patients after surgery."

To evaluate the efficacy of the new sticker, Hammill's team tested it in both small and large animal models. In the studies, ultrasound imaging consistently detected changes in the shape-shifting sticker, even when it was 10 centimeters deep inside tissue. When exposed to fluids with abnormally high or low pH levels, the sticker altered its shape within minutes.

Rogers and Hammill imagine that the device could be implanted at the end of a surgical procedure. Or, because it's small and flexible, the device also fits (rolled up) inside a syringe, which clinicians can use to inject the tag into the body.

Solar Panels in Your Eyeballs to Restore Vision

Scientists are working on implanting small solar panels inside people's eyes to restore vision.

While it may sound like science fiction, a group of Australian scientists is actually working on implanting small solar panels inside people's eyes. Patients with irreversible eye diseases could gain a significantly better quality of life thanks to the novel technology.

UNSW researcher Dr Udo Roemer is an engineer who specializes in photovoltaics, known more commonly as solar panel technology. He is in the early stages of researching how solar technology can be used to convert light entering the eye into electricity, bypassing the damaged photoreceptors to transmit visual information to the brain.

“People with certain diseases like retinitis pigmentosa and age-related macular degeneration slowly lose their eyesight as photoreceptors at the center of the eye degenerate,” Dr Roemer says.

“It has long been thought that biomedical implants in the retina could stand in for the damaged photoreceptors. One way to do it is to use electrodes to create voltage pulses that may enable people to see a tiny spot.

“There have already been trials with this technology. But the problem with this is they require wires going into the eye, which is a complicated procedure.”

But an alternative idea is to attach a tiny solar panel to the eyeball that converts light into the electric impulses the brain uses to create our visual fields. The panel would be naturally self-powered and portable, doing away with the need for cables and wires into the eye, reports Lachlan Gilbert in UNSW News.

Dr Roemer isn’t the first to investigate the use of solar cells to assist in restoring sight. But rather than focus on silicon-based devices, he has turned his attention to other semiconductor materials such as gallium arsenide and gallium indium phosphide, mainly because it’s easier to tune their properties. These materials are also used in the solar industry at large to make much more efficient solar panels, although they’re not as cheap as all-purpose silicon.

“In order to stimulate neurons, you need a higher voltage than what you get from one solar cell,” Dr Roemer says.

“If you imagine photoreceptors being pixels, then we really need three solar cells to create enough voltage to send to the brain. So we’re looking at how we can stack them, one on top of the other, to achieve this.

“With silicon this would have been difficult, that’s why we swapped to gallium arsenide where it’s much easier.”
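The series-stacking arithmetic Dr Roemer describes can be sketched numerically. This is a minimal illustration with assumed round numbers; the per-cell output (about 0.9 V, in the range of a single GaAs junction) and the 2.5 V stimulation threshold are illustrative assumptions, not figures from the UNSW research:

```python
# Sketch of series-stacked photovoltaic cells driving a stimulation
# threshold. Both numbers below are illustrative assumptions, not
# measured values from the UNSW work.

CELL_VOLTAGE = 0.9   # assumed volts per illuminated cell
THRESHOLD = 2.5      # assumed volts needed to stimulate neurons

def stacked_voltage(n_cells: int) -> float:
    """Cells wired in series add their voltages."""
    return n_cells * CELL_VOLTAGE

for n in (1, 2, 3):
    v = stacked_voltage(n)
    status = "sufficient" if v >= THRESHOLD else "insufficient"
    print(f"{n} cell(s): {v:.1f} V ({status})")
```

With these assumed values, one or two cells fall short of the threshold while a three-cell stack clears it, mirroring the "three solar cells" figure in the quote above.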

Related Spiral Lens Gives you Clearer Vision

So how far along is this research?

Dr Roemer says it’s in the proof-of-concept stage.

“So far we’ve successfully put two solar cells on top of each other in the lab on a large area – about 1 cm², which has got some good results.”

The next step will be to make them into the tiny pixels required for sight, etching grooves to separate them. It will then be a small step to increase the stack to three solar cells.

Dr Roemer envisages that by the time this technology can be tested in humans – after extensive testing in the lab, followed by testing in animal models – the device will be about 2 mm² in size, with pixels measuring about 50 micrometers (five hundredths of a millimeter). He stresses that it’s still a long way down the track before this technology will be implantable in the retinas of people with degenerative eye diseases.

“One thing to note is that even with the efficiencies of stacked solar cells, sunlight alone may not be strong enough to work with these solar cells implanted in the retina,” he says.

“People may have to wear some sort of goggles or smart glasses that work in tandem with the solar cells that are able to amplify the sun signal into the required intensity needed to reliably stimulate neurons in the eye.”

FDA Clears Dexcom Stelo Glucose

FDA has cleared Stelo by Dexcom – the first glucose biosensor that doesn’t require a prescription.

DexCom, the global leader in real-time continuous glucose monitoring for people with diabetes, announced today that the FDA has cleared Stelo by Dexcom – the first glucose biosensor that doesn’t require a prescription. There are approximately 25 million people in the U.S. living with Type 2 diabetes who do not use insulin and who can benefit from continuous glucose monitoring (CGM) technology. Today, Dexcom G7 is available for them with a prescription. Stelo, cleared for use without a prescription, will make it even easier for this population to access leading CGM technology, and will provide an option for those who do not have insurance coverage for CGM.

Related DarioHealth to Integrate Dexcom CGM Data

“Dexcom continues to lead innovation in the CGM market, with a long list of first-in-market advances. Dexcom was the first to connect CGM to multiple insulin delivery devices, the first to connect CGM to a smartphone, the first to replace fingersticks for treatment decisions, and now is creating a new category by bringing the first glucose biosensor cleared for use over-the-counter,” said Jake Leach, executive vice president and chief operating officer at Dexcom. “Based on our experience serving people with Type 2 diabetes not using insulin, we have developed Stelo with their unique needs in mind.”

Continuous glucose monitoring plays an integral role in the management of Type 2 diabetes, and the benefits are proven when it is used alone or alongside other diabetes and weight management medications. Studies show the use of Dexcom continuous glucose monitoring by people with Type 2 diabetes is associated with clinically meaningful improvement in time in range, A1c and quality of life, reports Business Wire.

“Use of CGM can help empower people with diabetes to understand the impact of different foods and activity on their glucose values,” said Dr. Tamara Oser, MD, Family Physician. “For people newly diagnosed with Type 2 diabetes or not taking insulin, these devices are often not covered by insurance and Stelo presents an opportunity to provide valuable information that can impact their diabetes management.”

Stelo will be available for purchase online without a prescription starting summer 2024.

About DexCom

DexCom, Inc. empowers people to take real-time control of health through innovative continuous glucose monitoring (CGM) systems. Headquartered in San Diego, Calif., and with operations across Europe and select parts of Asia/Oceania, Dexcom has emerged as a leader of diabetes care technology. By listening to the needs of users, caregivers, and providers, Dexcom works to simplify and improve diabetes management around the world.

Smart Glasses Use Eye Tracking Via Sonar

Researchers have developed prototypes of a technology that uses sonar for tracking eye movements.

Prototypes of a sonar-like device have been created at Cornell University in New York, and it may eventually replace cameras for tracking eye movements. It makes use of tiny speakers, one for each eye, that emit sound at frequencies above 18 kHz, which most people cannot hear.

The sound is aimed at the wearer's face, reflects off it, and is recorded by four microphones on either side of the headset. An algorithm known as GazeTrak then interprets these sound waves, helping the researchers discern the direction of the wearer's gaze, reports Mixed.
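GazeTrak interprets the reflections with a learned model, but the underlying physics of acoustic sensing can be illustrated with a toy simulation. The sketch below uses made-up numbers (a 96 kHz sample rate, an 18–22 kHz chirp, a 40-sample echo delay) and recovers the arrival delay of a simulated echo by brute-force cross-correlation; it illustrates the general principle, not the researchers' actual method:

```python
import math

# Toy acoustic-sensing sketch: an inaudible chirp is emitted, and the
# delay of its reflection, recovered by cross-correlation, encodes the
# geometry of the reflecting surface. All numbers are made up; GazeTrak
# itself feeds the raw reflections to a learned model.

FS = 96_000               # sample rate in Hz (assumed)
DURATION = 0.005          # 5 ms probe burst
F0, F1 = 18_000, 22_000   # chirp sweep, just above human hearing

n = int(FS * DURATION)
k = (F1 - F0) / DURATION  # chirp rate in Hz per second
probe = [math.sin(2 * math.pi * (F0 * (i / FS) + 0.5 * k * (i / FS) ** 2))
         for i in range(n)]

# Simulate an echo arriving 40 samples later at 30% amplitude.
true_delay = 40
echo = [0.0] * true_delay + [0.3 * s for s in probe]

def best_lag(signal, template, max_lag):
    """Return the lag (in samples) where the template best matches."""
    scores = []
    for lag in range(max_lag):
        scores.append(sum(template[i] * signal[lag + i]
                          for i in range(len(template))
                          if lag + i < len(signal)))
    return max(range(max_lag), key=lambda lag: scores[lag])

print(best_lag(echo, probe, 100))  # -> 40
```

A chirp (rather than a pure tone) is used because its autocorrelation has a single sharp peak, so the recovered lag is unambiguous.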

The research team believes that sonar technology offers a number of benefits. Compared to camera-based systems, it uses less electricity and gives users greater privacy, because no cameras are continuously recording. Additionally, it could reduce the weight and production costs of VR headsets.

Sonar-based eye tracking demonstrated an accuracy of up to 3.6 degrees in tests involving 20 individuals. This is less accurate than modern high-end devices like the Apple Vision Pro, but according to the researchers, the performance should be adequate for the majority of virtual reality applications.

Related These AR Glasses Can Translate Languages and Detect Images

As of now, the technology has one significant flaw: each user's eye is unique, so the AI model employed by GazeTrak must be trained individually. Bringing the eye-tracking sonar to market would require collecting enough data to produce a universal model.

Virtual reality relies heavily on eye tracking technology, which lets you focus on certain spots in menus to navigate, or make eye contact with other avatars in virtual surroundings. Right now, the Apple Vision Pro is showing how accurate eye tracking can enhance the user experience.

Additionally, eye tracking allows for foveated rendering, as on the PlayStation VR 2, which takes the user's focus into account by displaying a precise depiction of the area being watched and a less detailed picture in the periphery. Ingenious control techniques are also made possible by the technology; one example is the VR game Before Your Eyes, which you control with your eyes alone.

New AI Algorithm Receives FDA Clearance

A technology that enables AI-powered sleep diagnosis using pulse oximetry devices.

EnsoData, a pioneer in healthcare AI, achieved FDA 510(k) clearance for groundbreaking technology that enables AI-powered sleep diagnosis using FDA-cleared pulse oximetry devices. Powered by EnsoSleep PPG scoring, widely available and wearable pulse ox technology can be deployed for a high quality, accessible, and cost-effective approach to diagnosing sleep disorders, including sleep apnea.

Sleep apnea is a highly prevalent but often undiagnosed condition that exacerbates cardiovascular diseases like high blood pressure and congestive heart failure, neurodegenerative diseases like Alzheimer's, metabolic disorders including diabetes, stroke, and more. It is estimated that over 29.4 million Americans have sleep apnea, with more than 80% of cases still undiagnosed.

With early and accurate diagnosis of sleep apnea, clinicians can help prevent complications and reduce healthcare expenses, not only saving lives but also having a profound impact on healthcare economics.

EnsoSleep, EnsoData's previously FDA-cleared diagnostic AI analysis and sleep scoring solution, uses machine learning to analyze data from traditional sleep studies to aid physicians in diagnosing sleep disorders. With this new clearance, EnsoSleep PPG will provide more opportunities for clinicians to effectively reach an undiagnosed patient population by enabling AI-driven analysis using the photoplethysmogram (PPG) signals recorded by pulse oximeters.

Related Sleep Disorder Diagnosis Software Receives FDA Approval

"Expanding EnsoData's capability to collect and analyze PPG signals from simple, wearable pulse ox devices will accelerate the identification, diagnosis and treatment of sleep disordered breathing events, including sleep apnea," said Justin Mortara, President and CEO of EnsoData. "With this latest FDA clearance, we expect to build upon and diversify our partner ecosystem to reach more patients with our leading AI solutions."

Compared to earlier generations of sleep diagnostic equipment, pulse ox devices are smaller and less expensive. They can be as simple to wear as a ring or watch and record physiological data related to sleep and breathing, such as a patient's oxygen saturation levels and heart rate.

Using this data, EnsoSleep PPG's deep learning models automatically detect respiratory events (including sleep disordered breathing events such as apneas or hypopneas), sleep stages (REM, deep sleep, light sleep, and wake), and other sleep measures. These results can be displayed and edited by a qualified healthcare professional, then exported into a final sleep report for the patient.
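EnsoSleep PPG's detection relies on deep learning, but the classical screening idea it improves upon can be sketched simply: the oxygen desaturation index counts dips in SpO2 of a given depth (3% is a common criterion) below a running baseline. The sketch below applies that rule to a synthetic trace; the window sizes and data are assumptions for illustration, not EnsoData's method:

```python
# Toy oxygen-desaturation counter over a synthetic SpO2 trace (one
# sample per second). Classical screening counts dips of >= 3 points
# below a recent baseline; EnsoData's cleared product instead applies
# learned models to the raw PPG waveform, so this is only a sketch.

def count_desaturations(spo2, drop=3.0, baseline_window=120):
    """Count events where SpO2 falls `drop` points below the running
    baseline (mean of the previous `baseline_window` samples)."""
    events = 0
    in_event = False
    for i, value in enumerate(spo2):
        window = spo2[max(0, i - baseline_window):i] or [value]
        baseline = sum(window) / len(window)
        if value <= baseline - drop:
            if not in_event:      # count each contiguous dip once
                events += 1
                in_event = True
        else:
            in_event = False
    return events

# Synthetic trace: steady 97% with two brief dips to 92%.
trace = [97.0] * 60 + [92.0] * 10 + [97.0] * 60 + [92.0] * 10 + [97.0] * 60
print(count_desaturations(trace))  # -> 2
```

A learned model replaces this fixed threshold with patterns inferred from the full PPG waveform, which is what lets it also estimate sleep stages rather than breathing events alone.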

By lowering the barrier to an accurate sleep test through more widely available pulse ox devices, clinicians can expedite the diagnostic process and provide patients with answers to their health problems more quickly.

"Our interoperable AI tools are democratizing the ability to accurately measure sleep and aid in diagnosis of sleep disorders broadly – for the existing category of FDA-cleared pulse oximetry devices and sensors that are already widely deployed, in-use clinically, and growing in their adoption daily. With PPGs among the most commonplace of medical waveforms collected across healthcare settings, from diagnostic tests to bedside monitors and consumer wearables, this will be transformative for patient access and outcomes to achieve better sleep and overall health," said Chris Fernandez, Co-founder and CRO of EnsoData. "This FDA clearance marks a pivotal moment in sleep diagnostics, and also long-term therapy monitoring and management, where enhanced accessibility and affordability can create a new normal in sleep care."

About EnsoData

EnsoData is a healthcare technology company that uses artificial intelligence (AI) and machine learning (ML) technology to perform complex and time-consuming data interpretation and analysis to help diagnose and treat health conditions.

Smart Prosthetic Lets Man Feel Hot and Cold

A new device that makes it possible for a person with an amputation to sense temperature.

The history of prosthetics begins in ancient Egypt, and there are indications that the "Cairo toe" was more than just cosmetic. The wood-and-leather digit was made to be flexible even 3,000 years ago, indicating that it served a practical purpose in addition to being aesthetically pleasing.

Since then, a lot of effort has gone into finding creative ways to improve the lives of those who have had amputations. Engineers started adding wires, gears, and springs to prostheses as early as the 15th century to allow users to grab objects and bend joints, albeit in a restricted way. Prosthetics were designed to carry out particular functions, such as playing the piano or holding a shield. They also increasingly incorporated newly available materials such as rubber, polymers, and thin metals.

Now, a team of researchers in Italy and Switzerland has developed a new device that makes it possible for a person with an amputation to sense temperature with a prosthetic hand. The advance is a step toward prosthetic limbs that fully restore a person's senses, increasing their utility and acceptance by the wearer, reports Science News.

The device, dubbed "MiniTouch," was affixed to the prosthetic hand of a 57-year-old man named Fabrizio Fidati, whose amputation was above the wrist. In experiments, he demonstrated perfect accuracy in identifying cold, cool, and hot bottles of liquid; much greater accuracy than chance in distinguishing between plastic, glass, and copper; and approximately 75% accuracy in sorting steel blocks according to temperature.

The prosthetic works by applying heat or cold to the skin on the upper arm in specific locations that trigger a thermal sensation in the phantom hand.

Related New Exoskeleton Helping Disabled Walk, Stand

“In a previous study, we have shown the existence of these spots in the majority of amputee patients that we have treated,” says Solaiman Shokur at the Swiss Federal Institute of Technology in Lausanne.

Fidati was also able to accurately identify glass, copper, and plastic by touch when wearing the prosthetic, with an accuracy of slightly over two-thirds, much as he could with his intact left hand.

In a different, recently published study, Shokur and his associates demonstrated that individuals with amputations who use prosthetics that sense temperature can distinguish between moist and dry objects.

“We could provide a wetness sensation to amputees and… they were as good at detecting different levels of moisture as with their intact hands,” says Shokur.

Robot Startup Secures $675M, Inks Partnership With OpenAI

Figure has raised $675M in Series B funding and entered into a collaboration agreement with OpenAI.

Figure, an AI robotics company developing general purpose humanoid robots, announced that it has raised $675M in Series B funding at a $2.6B valuation with investments from Microsoft, OpenAI Startup Fund, NVIDIA, Jeff Bezos (through Bezos Expeditions), Parkway Venture Capital, Intel Capital, Align Ventures, and ARK Invest. This investment will accelerate Figure's timeline for humanoid commercial deployment.

In conjunction with this investment, Figure and OpenAI have entered into a collaboration agreement to develop next generation AI models for humanoid robots, combining OpenAI's research with Figure's deep understanding of robotics hardware and software. The collaboration aims to help accelerate Figure's commercial timeline by enhancing the capabilities of humanoid robots to process and reason from language.

"We've always planned to come back to robotics and we see a path with Figure to explore what humanoid robots can achieve when powered by highly capable multimodal models. We're blown away by Figure's progress to date and we look forward to working together to open up new possibilities for how robots can help in everyday life," said Peter Welinder, VP of Product and Partnerships at OpenAI.

The Figure team, made up of top AI robotics experts from Boston Dynamics, Tesla, Google DeepMind, and Archer Aviation, has made remarkable progress in the past few months in the key areas of AI, robot development, robot testing, and commercialization (Figure recently announced its first commercial agreement with BMW Manufacturing to bring humanoids into automotive production). This new capital will be used strategically for scaling up AI training, robot manufacturing, expanding engineering headcount, and advancing commercial deployment efforts.

Figure will leverage Microsoft Azure for AI infrastructure, training, and storage. "We are excited to collaborate with Figure and work towards accelerating AI breakthroughs. Through our work together, Figure will have access to Microsoft's AI infrastructure and services to support the deployment of humanoid robots to assist people with real world applications," said Jon Tinter, Corporate Vice President of Business Development at Microsoft.

Related Robot Completes Surgery in Space

"Our vision at Figure is to bring humanoid robots into commercial operations as soon as possible. This investment, combined with our partnership with OpenAI and Microsoft, ensures that we are well-prepared to bring embodied AI into the world to make a transformative impact on humanity," said Brett Adcock, Founder and CEO of Figure. "AI and robotics are the future, and I am grateful to have the support of investors and partners who believe in being at the forefront."

About Figure

Figure is an AI Robotics company developing autonomous general purpose humanoid robots. Our Humanoid is designed for initial deployment into the workforce to address labor shortages, jobs that are undesirable or unsafe, and to support supply chain on a global scale. Figure is a Sunnyvale, California-based company with a team of 80 employees, founded 21 months ago.

Dubai Startup Launches Iron Man Inspired Contact Lens

Xpanceo has unveiled prototypes of smart contact lenses inspired by the Iron Man movies.

Dubai-based start-up Xpanceo has unveiled prototypes of its smart contact lens, which aims to mimic the technology featured in the Iron Man films.

Xpanceo was founded by Roman Axelrod, a former engineer at Google and Facebook, who was inspired by the fictional character Tony Stark, also known as Iron Man, and his advanced artificial intelligence system, JARVIS. Axelrod set out to design a wearable that would provide a smooth and engaging interface between people and machines without requiring cumbersome screens, headphones, or glasses.
Axelrod said that the business, which raised $40 million in a seed funding round in October, intends to put the device in stores by 2027, following human trials that are expected to be completed in two years, reports ITC.

"It usually takes from half a year to several years [to conduct human trials]. After that, they will be on shelves, I would say [by] 2027 or 2028, in optical stores," he said.

The four Xpanceo smart contact lenses with different functions shown at Mobile World Congress 2024 will be combined into one universal device in the future.

The sensor lens is designed to gauge eye pressure and notify the user of any signs of glaucoma. The "super vision" lens uses nanoparticles to enlarge images or enhance visibility in low light. Visitors did not, however, put the lenses into their eyes during the show; instead, their capabilities were demonstrated on specialized platforms.

Related Smart Contact Lens Powered by Salt Water

Xpanceo intends to incorporate a neural interface into the device, enabling direct mental control of the lens. Glucose sensors are also planned, along with monitoring of cortisol levels, blood pressure, and more. If your blood pressure is elevated, the lens will be able to alert you to skip that next cup of coffee.

In addition to giving vision superpowers, the nanoparticles will treat vision-related disorders like myopia and strabismus. The lens will dynamically alter its own properties as needed to provide consistently good vision.

Because it offers a customized, interactive, and augmented reality experience, the smart contact lens has the potential to transform a number of industries, including communication, education, entertainment, health, and security. Those with diseases, disabilities, or visual impairments may see improvements in their quality of life and abilities.
