Published on Wed Jan 01 2020

Smart Summarizer for Blind People

Mona Teja K, Mohan Sai S, H S S S Raviteja D

Abstract

In today's world, time is a very important resource. In our busy lives, most of us hardly have time to read the complete news, so we often just skim the headlines and settle for that. As a result, we may miss part of the news or misinterpret it entirely. The situation is even worse for people who are visually impaired or have lost their ability to see; the inability to read text has a huge impact on their lives. There are a number of methods for blind people to read text. Braille script, in particular, is one example, but it is a highly inefficient method because it is time-consuming and requires a lot of practice. We therefore present a method for visually impaired people based on the sense of sound, which is better and more accurate than the sense of touch. This paper deals with an efficient method to summarize news into important keywords, so as to save the effort of going through the complete text every single time. It discusses and implements several APIs and modules, such as Tesseract and gTTS, along with several algorithms in detail, namely Luhn's algorithm, Latent Semantic Analysis (LSA), and the TextRank algorithm. The other functionality this paper deals with is converting the summarized text to speech, so that the system can aid even blind people.
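As a rough illustration of the pipeline the abstract describes (OCR, extractive summarization, then text-to-speech), here is a minimal Python sketch. The paper names Tesseract, gTTS, and the Luhn, LSA, and TextRank algorithms; the use of the pytesseract and sumy packages, the function names, and the file paths below are illustrative assumptions, not the authors' implementation.

# Minimal sketch: news image -> OCR -> extractive summary -> spoken audio.
# Assumes the pytesseract, Pillow, sumy, and gTTS packages; names and paths
# are illustrative, not taken from the paper.
import pytesseract
from PIL import Image
from gtts import gTTS
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.summarizers.luhn import LuhnSummarizer

def image_to_spoken_summary(image_path, out_mp3, sentence_count=3):
    # Step 1: OCR - extract raw text from the image with Tesseract.
    text = pytesseract.image_to_string(Image.open(image_path))

    # Step 2: Extractive summarization. Luhn's algorithm scores sentences
    # by clusters of high-frequency significant words; sumy's LsaSummarizer
    # or TextRankSummarizer could be swapped in here for the other two
    # algorithms the abstract mentions.
    parser = PlaintextParser.from_string(text, Tokenizer("english"))
    sentences = LuhnSummarizer()(parser.document, sentence_count)
    summary = " ".join(str(s) for s in sentences)

    # Step 3: Convert the summary to speech with gTTS so a blind user can
    # listen to the result instead of reading it.
    gTTS(summary, lang="en").save(out_mp3)

image_to_spoken_summary("news_page.png", "summary.mp3")

Swapping the summarizer is a one-line change in this sketch, which mirrors how the paper compares the three algorithms on the same extracted text.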

Related Papers

Sat Aug 07 2021
Artificial Intelligence
Screen2Words: Automatic Mobile UI Summarization with Multimodal Learning
Screen2Words is a novel screen summarization approach that automatically encapsulates essential information of a UI screen into a coherent language phrase. Summarizing mobile screens requires a holistic understanding of the multi-modal data of mobile UIs.
Mon May 17 2021
NLP
Multi-Modal Image Captioning for the Visually Impaired
Up to 21% of the questions blind people ask about the images they capture pertain to the text present in them. We propose altering AoANet, a state-of-the-art captioning model, to leverage the text detected in the image as an input feature.
Sun Oct 18 2020
Artificial Intelligence
Chart-to-Text: Generating Natural Language Descriptions for Charts by Adapting the Transformer Model
A neural model for automatically generating natural language summaries for charts. The generated summaries provide an interpretation of the chart and convey the key insights found within it.
Thu Dec 27 2018
Computer Vision
Chart-Text: A Fully Automated Chart Image Descriptor
Chart-Text is a fully automated system that creates textual descriptions of chart images. The system achieves an accuracy of 99.72% in classifying the charts and an accuracy of 78.9% in extracting the data.
Sun Nov 25 2018
Artificial Intelligence
A Survey of Mobile Computing for the Visually Impaired
The number of visually impaired or blind (VIB) people in the world is estimated at several hundred million. This paper surveys machine-learning-based mobile applications and identifies the most relevant ones.
Tue Mar 24 2020
Computer Vision
TextCaps: a Dataset for Image Captioning with Reading Comprehension