- Sign Companion is a full-stack web application developed using deep learning, computer vision, and web technologies.
- It is designed to assist differently-abled individuals, particularly those who are deaf, mute, or blind, by translating Indian Sign Language (ISL) into text and converting text or speech into sign language.
| Overview | Features | User Interface | Performance | Additional Information |
|---|---|---|---|---|
| Sign Companion | Sign to Text | Sign to Text | Confusion Matrix | Purpose |
| Technology Stack | Text to Sign | Text to Sign | Accuracy and Loss Graphs | Contribution |
| | Speech to Sign | Speech to Sign | Model Architecture | Future Enhancements |
| | Contact Us | | | |
| | About Us | | | |
| | Login | | | |
| | Signup | | | |
- Description: The user uploads or captures a photo of their hand showing a specific ISL sign. The system translates this sign into text.
- UI Screenshot:
- Description: Users can enter text, which is then converted into the corresponding ISL signs and displayed.
- UI Screenshot:
- Description: Users speak into the microphone, and the spoken words are converted into ISL signs.
- UI Screenshot:
This feature translates Indian Sign Language (ISL) signs into corresponding text, making communication easier for individuals who use ISL. The process involves several steps to ensure accurate recognition and translation of the sign. Here's a detailed overview:
- Take a Normal Picture: The user captures or uploads a photo of the hand showing the ISL sign.
- Add Contrast: Contrast is increased to make the hand easier to distinguish from the background.
- Hand Detection Using the cvzone Library: The hand is located in the image with the cvzone hand detector.
- Isolate Hand and Key Points: The detected hand and its landmark key points are isolated from the rest of the frame.
- Crop the Hand: The hand region is cropped and passed to the CNN for classification (see the sketch after this list).
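The following is a minimal Python sketch of this pipeline, assuming OpenCV and the cvzone library are installed; the contrast factor, crop padding, and resize dimensions are illustrative assumptions rather than the project's exact values:

```python
import cv2
from cvzone.HandTrackingModule import HandDetector

detector = HandDetector(maxHands=1)  # detect a single hand per image

def sign_image_to_crop(path, offset=20, size=224):
    """Capture, contrast-boost, detect, isolate, and crop the hand."""
    img = cv2.imread(path)                              # take a normal picture
    img = cv2.convertScaleAbs(img, alpha=1.5, beta=0)   # add contrast (alpha is illustrative)
    hands, img = detector.findHands(img)                # hand detection via cvzone
    if not hands:
        return None                                     # no hand found in the image
    x, y, w, h = hands[0]["bbox"]                       # isolate hand; key points are in hands[0]["lmList"]
    crop = img[max(y - offset, 0):y + h + offset,
               max(x - offset, 0):x + w + offset]       # crop the hand with a small margin
    return cv2.resize(crop, (size, size))               # resized for the CNN classifier
```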
This feature converts text input into sign language. The process is as follows:
- Extract Each Character from Text: The system extracts each character from the given text.
- Fetch Corresponding Sign Image: For each character, the corresponding image representing that letter in ISL is fetched and displayed (see the sketch after this list).
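Below is a minimal Python sketch of this lookup; the folder layout (`signs/A.png`, `signs/B.png`, ...) is an assumption for illustration, not necessarily the project's actual asset structure:

```python
from pathlib import Path

SIGN_DIR = Path("signs")  # assumed directory of per-letter ISL images, e.g. signs/A.png

def text_to_sign_images(text):
    """Extract each character and fetch the corresponding ISL sign image path."""
    images = []
    for ch in text.upper():
        if ch.isalpha():                        # letters map directly to A-Z sign images
            images.append(SIGN_DIR / f"{ch}.png")
        # spaces, digits, and punctuation would need extra images or can be skipped
    return images

print(text_to_sign_images("Hello"))  # paths for H, E, L, L, O
```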
This feature converts spoken words into sign language:
- Generate Transcript from Speech: The user's speech is captured with a speech recognition library in React and transcribed into text.
- Convert Text to Sign: Once the transcript is generated, the same process as the "Text to Sign" feature is applied: each character from the transcript is fetched and displayed as the corresponding sign image (see the sketch after this list).
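In the app itself, speech capture happens in the React frontend; the sketch below is a rough server-side analogue using the Python SpeechRecognition package (a swapped-in library, not the project's actual code), reusing the `text_to_sign_images` helper from the Text to Sign sketch above:

```python
import speech_recognition as sr

def speech_to_sign_images():
    """Transcribe microphone speech, then reuse the Text to Sign lookup."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:                   # capture the user's speech
        audio = recognizer.listen(source)
    transcript = recognizer.recognize_google(audio)   # generate the transcript
    return text_to_sign_images(transcript)            # same lookup as the Text to Sign sketch
```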
- Backend: Django
- Frontend: React, HTML, CSS, JavaScript
- Deep Learning Model: TensorFlow-based Convolutional Neural Network (CNN) trained on a custom dataset of 31,200 training samples and 6,240 testing samples, achieving 99% accuracy.
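As an illustration of how these pieces could be wired together, here is a hypothetical Django view that accepts a hand image uploaded from the React frontend and returns the predicted letter; the view name and the `predict_letter` helper are assumptions, not the project's actual code:

```python
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
def sign_to_text(request):
    """Receive an uploaded hand image and return the predicted ISL letter."""
    if request.method != "POST" or "image" not in request.FILES:
        return JsonResponse({"error": "POST an 'image' file"}, status=400)
    # predict_letter is a hypothetical helper that runs the hand-cropping
    # pipeline and the CNN described elsewhere in this README
    letter = predict_letter(request.FILES["image"])
    return JsonResponse({"letter": letter})
```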
These are some examples of images used for training the model:
- Description: An example image from the training dataset showing ISL signs for different letters.
- Image:
These are some examples of images used for testing the model:
- Description: An example image from the testing dataset showing signs for different letters.
- Image:
The confusion matrix is used to visualize the performance of the model on the test dataset. It shows the number of correct and incorrect classifications for each sign:
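A minimal sketch of how such a matrix can be generated with scikit-learn, assuming the trained model and the 6,240-sample test set (integer labels) are already loaded; variable names are illustrative:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

def plot_confusion_matrix(model, x_test, y_test):
    """Plot correct vs. incorrect classifications per sign on the test set."""
    y_pred = np.argmax(model.predict(x_test), axis=1)   # predicted class per test image
    cm = confusion_matrix(y_test, y_pred)               # rows: true signs, columns: predicted signs
    ConfusionMatrixDisplay(cm).plot(cmap="Blues")
    plt.title("ISL sign classification - confusion matrix")
    plt.show()
```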
The following graphs display the training and testing accuracy and loss over epochs:
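These curves are typically drawn from the Keras training history; a minimal sketch, assuming the `History` object returned by `model.fit` was kept and training used an accuracy metric with a validation/test split:

```python
import matplotlib.pyplot as plt

def plot_history(history):
    """Plot training vs. test accuracy and loss over epochs."""
    for metric in ("accuracy", "loss"):
        plt.figure()
        plt.plot(history.history[metric], label=f"train {metric}")
        plt.plot(history.history[f"val_{metric}"], label=f"test {metric}")
        plt.xlabel("epoch")
        plt.ylabel(metric)
        plt.legend()
    plt.show()
```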
The CNN architecture used for training the model is as follows:
- Input Layer: Processes the image input.
- Convolutional Layers: Multiple convolutional layers are applied to extract features from the images.
- Max Pooling Layers: Reduce the dimensionality of the feature maps while retaining important information.
- Fully Connected Layers: The feature maps are flattened and passed through fully connected layers for classification.
- Output Layer: Classifies the image into the corresponding ISL letter.
Here's a visual representation of the model architecture:
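A minimal Keras sketch of this layer stack is shown below; the filter counts, kernel sizes, input resolution, and 26-letter class count are illustrative assumptions, not the exact trained configuration:

```python
from tensorflow.keras import layers, models

def build_model(input_shape=(224, 224, 3), num_classes=26):
    """CNN with the layer types listed above (sizes are illustrative)."""
    return models.Sequential([
        layers.Input(shape=input_shape),                  # input layer: the cropped hand image
        layers.Conv2D(32, 3, activation="relu"),          # convolutional layers extract features
        layers.MaxPooling2D(),                            # max pooling reduces dimensionality
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),                                 # flatten the feature maps
        layers.Dense(128, activation="relu"),             # fully connected layer
        layers.Dense(num_classes, activation="softmax"),  # output layer: one ISL letter per class
    ])

model = build_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```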
Sign Companion aims to bridge the communication gap for individuals who are differently-abled, particularly by translating Indian Sign Language (ISL). It is designed to be user-friendly and accessible to all, whether or not they are familiar with ISL.
- Differently-abled individuals can communicate more easily with others.
- Anyone can learn and communicate in Indian Sign Language through text or speech.
Contributions and feedback are welcome! Please feel free to open issues or submit pull requests to improve the platform.
- Expand the dataset to include more signs and gestures.
- Add support for additional languages and sign systems.
- Improve speech-to-sign accuracy and robustness.