Editorial Feature

Translating Sign Language into Voice and Text Using Sensors

Image Credit: Daisy Daisy/Shutterstock.com

Several different types of sensors have been developed to translate sign language into spoken and/or written words. One recent Nano Energy study discusses a novel self-powered triboelectric flex sensor (STFS) that can successfully sense and translate sign language into both voice and text.

Sensor Solutions for Sign Language Interpretation

Sign language is a widely used communication tool for people with speech and/or hearing disorders.

Since not everyone easily understands sign language, recent technological efforts have focused on developing systems capable of converting sign language hand gestures into audio or text.

Sensors, particularly those offering high sensitivity, mechanical flexibility, and rapid response times, have received a considerable amount of attention as potential solutions for sign language interpretation (SLI). While many types of sensors have been evaluated for this application, their general working principle is the same: human hand gestures and finger bending are sensed and converted into electrical signals that can then be rendered as speech or text.

Human Skin-Like Sensors

Several different types of nanotechnologies such as nanowires, nano-pyramids, and nano-hemispheres have been used to improve SLI sensor sensitivity and range of detection. Despite their potential added benefits to these sensors, such micro-nano structures often require extensive and highly complex fabrication processes that limit their practical application.

Bio-inspired designs that closely resemble human skin have been investigated. Human skin has intermediate ridge-structured layers that separate the dermis (the innermost layer of skin that sits directly on top of subcutaneous tissue) from the outermost layer of skin, the epidermis. As tactile signals such as pressure, temperature, and pain are received by the skin, stress arises between the dermal and epidermal layers, allowing the signal to be transmitted to the appropriate receptor.

Several different types of pressure sensors and electronic skins (e-skins) have been developed based on the mechanoreceptor tactile sensing function of human skin. The transduction modes that have been applied to these sensors include capacitive, piezoresistive, and triboelectric. However, both piezoresistive and capacitive sensors have high-voltage requirements, which make them bulky and limit their flexibility in SLI applications.

Comparatively, triboelectric sensors, which function by coupling triboelectrification with electrostatic induction, have gained a considerable amount of attention as a new sensing mechanism for e-skin sensors. In particular, triboelectric nanogenerators, which require minimal stimulation to generate a large electrical signal, can comprise various types of materials while maintaining optimal sensitivity and performance output rates.

The Newly Designed STFS

Researchers from the Department of Electronics Engineering at Kwangwoon University in Seoul, South Korea, have recently published the design and results of a novel skin-inspired self-powered triboelectric flex sensor (STFS).

Inspired by human skin morphology, this revolutionary flex sensor consists of randomly distributed microstructured (RDM) triboelectric layers, electrodes and spacers, all of which are encapsulated within a thin and flexible layer of polydimethylsiloxane (PDMS) material that ensures protection of the sensor device from the potentially damaging external environment.

This random collection of microstructures closely resembles the dermis-epidermis intermediate layer and is responsible for conducting tactile stimuli. The triboelectric layer within the STFS consists of a thin Nylon 6/6 film and a thin polytetrafluoroethylene (PTFE) film coated with sputtered titanium/copper (Ti/Cu).

The Nylon 6/6 and PTFE triboelectric materials were chosen because they sit at opposite ends of the triboelectric series: upon contact, Nylon 6/6 tends to become positively charged while PTFE becomes negatively charged.

Upon contact and subsequent separation of these two triboelectric layers, the PTFE layer acquires electrons from the Nylon 6/6 and gains a negative charge. The effective contact area determines the amount of charge transferred between the PTFE and Nylon 6/6 layers.
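As a first-order illustration of this working principle (a sketch, not a model from the paper, with purely assumed numbers), the transferred charge can be treated as proportional to the effective contact area, and the open-circuit voltage then follows from the parallel-plate approximation:

```python
# First-order sketch of a contact-separation triboelectric sensor.
# Assumptions (not from the paper): transferred charge scales linearly
# with the effective contact area, and the open-circuit voltage follows
# the parallel-plate relation V = sigma * gap / eps0.

EPS0 = 8.854e-12  # vacuum permittivity, F/m


def open_circuit_voltage(contact_fraction, gap_m, sigma_max=8e-6):
    """Open-circuit voltage for a given fractional contact area and gap.

    contact_fraction: 0..1, fraction of the microstructured surface in contact
    gap_m:            separation between the Nylon 6/6 and PTFE layers (m)
    sigma_max:        surface charge density at full contact (C/m^2), assumed
    """
    sigma = sigma_max * contact_fraction  # more contact -> more charge
    return sigma * gap_m / EPS0


# No prior contact: zero surface charge, hence zero voltage.
print(open_circuit_voltage(0.0, 1e-3))  # -> 0.0
# Half of the RDM surface in contact, layers separated by 0.1 mm.
print(round(open_circuit_voltage(0.5, 1e-4), 1))
```

Because the embossed microstructures deform under pressure, the effective contact fraction grows with applied force, which is the mechanistic reason the output voltage tracks finger bending.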

To further mimic the architecture of human skin, the researchers thermally embossed the triboelectric layers against emery paper with a grit size of P800. Taken together, the fabrication method used to produce the STFS is not only reproducible but also cost-effective and straightforward.

How does the STFS Translate Sign Language?

In the absence of an external stimulus, the gap between the triboelectric materials is maintained at 1 millimeter (mm), resulting in zero net surface charge.

When a finger bends, the top and bottom triboelectric layers come into contact with each other. This contact increases the surface contact area of the interlocking RDM layers, and the difference in electron affinity between the Nylon 6/6 and PTFE layers drives charge transfer. Once the bent finger is released, the pressure on the sensor returns to zero and the charge transfer stops. The fabricated STFS achieved a high sensitivity of 0.77 V kPa⁻¹, a rise time of 83 milliseconds (ms), pressure detection capabilities within the range of 0.2 kPa up to 500 kPa, and exceptional stability exceeding 100,000 loading-unloading cycles.
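Using the reported figures, a simple range check and voltage estimate can be sketched as follows. Linearity across the full detection range is an illustrative assumption; the paper reports a sensitivity, not a complete transfer curve:

```python
# Reported STFS figures: sensitivity 0.77 V/kPa, detection range 0.2-500 kPa.
SENSITIVITY_V_PER_KPA = 0.77
P_MIN_KPA, P_MAX_KPA = 0.2, 500.0


def estimate_voltage(pressure_kpa):
    """Estimate output voltage for a pressure inside the detection range.

    Assumes a linear response (an illustrative simplification);
    returns None for pressures the sensor cannot resolve.
    """
    if not (P_MIN_KPA <= pressure_kpa <= P_MAX_KPA):
        return None
    return SENSITIVITY_V_PER_KPA * pressure_kpa


print(estimate_voltage(10.0))  # ~7.7 V for a 10 kPa press
print(estimate_voltage(0.05))  # -> None (below the 0.2 kPa floor)
```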

To test this sensor's sign language translation capabilities, the researchers attached five STFS sensors onto a glove with double-sided adhesive tape. The glove was then worn by testers, who wore a second glove over the sensor glove to ensure that the sensors did not detach.

All sensors were placed over each finger's phalangeal joints, which are considered the most relevant to sign language finger gestures. Bending angles vary between 0° and 90°, generating voltages of up to 20 V.

The accuracy of the sensors' translation capabilities was confirmed using standard American Sign Language (ASL) gestures, which were translated into corresponding voice and text. All signals passed through an analog-to-digital converter (ADC) and then to a microcontroller, which transmitted them to an Android smartphone application over Bluetooth.
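The processing chain described above (sample the five finger channels, digitize them, classify the flex pattern, and transmit the result) can be sketched as follows. The ADC parameters, bent-finger threshold, and gesture table are invented placeholders for illustration, not the authors' actual values or mapping:

```python
# Sketch of the glove's signal path: raw ADC counts -> per-finger flex
# flags -> gesture lookup. All numbers below are assumed placeholders.

FLEX_THRESHOLD_V = 5.0  # assumed decision level for "finger bent"

# Hypothetical lookup: flex pattern (thumb..pinky, 1 = bent) -> meaning.
# Real ASL recognition also involves hand orientation and motion.
GESTURES = {
    (0, 0, 0, 0, 0): "open hand",
    (1, 0, 0, 1, 1): "V",   # index and middle extended
    (1, 0, 1, 1, 1): "D",   # only index extended
}


def adc_to_volts(raw, vref=20.0, bits=10):
    """Convert a raw ADC count to volts (assumed 10-bit ADC, signal
    conditioned to a 20 V full scale to match the sensor's output)."""
    return raw * vref / ((1 << bits) - 1)


def classify(raw_counts):
    """Five raw ADC readings -> recognized gesture text, or None."""
    pattern = tuple(
        1 if adc_to_volts(r) >= FLEX_THRESHOLD_V else 0 for r in raw_counts
    )
    return GESTURES.get(pattern)


# Thumb, ring, and pinky bent past threshold -> "V".
print(classify([700, 10, 20, 650, 680]))  # -> V
```

In the published system, the recognized gesture is then sent over Bluetooth to the Android application, which renders it as voice and text.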

References and Further Reading

Maharjan, P., Bhatta, T., Salauddin, M., et al. (2020). A human skin-inspired self-powered flex sensor with thermally embossed microstructured triboelectric layers for sign language interpretation. Nano Energy, 76, 105071. doi:10.1016/j.nanoen.2020.105071.

Written by

Benedette Cuffari

After completing her Bachelor of Science in Toxicology with two minors in Spanish and Chemistry in 2016, Benedette continued her studies to complete her Master of Science in Toxicology in May of 2018. During graduate school, Benedette investigated the dermatotoxicity of mechlorethamine and bendamustine; two nitrogen mustard alkylating agents that are used in anticancer therapy.

Citations

Please use one of the following formats to cite this article in your essay, paper or report:

  • APA

    Cuffari, Benedette. (2020, October 22). Translating Sign Language into Voice and Text Using Sensors. AZoSensors. Retrieved on November 21, 2024 from https://www.azosensors.com/article.aspx?ArticleID=2044.

