
Design of an “Air-Writing Recognition” System

Published: 4 November 2022. Last modified: 29 September 2024.

CHAPTER 1

INTRODUCTION

1.1 OBJECTIVE OF PROJECT

The main aim of the project is to design an “Air-Writing Recognition” system. The project is also aimed at evaluating the performance of an operating system. Before delving into the implementation, an introduction is needed to the parts involved in the project. The whole report is centered on the use of Linux to run applications, so an introduction to Linux as the operating system is provided.

The objectives of this project are as follows:

• To compile the Qt for Embedded Linux C++ framework successfully.

• To configure the Debian Etch Linux distribution that comes with the SBC so that the X11 GUI service is removed and only the bare minimum of services is started.

Minimum Requirements:

The following are the minimum requirements for this project:

Software requirements:

• Linux OS (ubuntu-12.2)

• Qt Creator (4.8)

• Application language: Python

1.2 EXISTING SYSTEM

The existing system uses a Raspberry Pi board, a camera, an SD card, a power supply, an HDMI cable and a display unit. With this camera alone we cannot write any text, because it captures the image frame by frame and the processor speed is low. A digital pen with trajectory recognition can instead be built using an accelerometer. The digital pen consists of a tri-axial accelerometer, a microcontroller, and an RF wireless transmission module for sensing and collecting the accelerations of handwriting and gesture trajectories. The embedded system first extracts time- and frequency-domain features from the acceleration signals and then transmits them using an RF transmitter. At the receiver, the RF signals are picked up by an RF receiver and passed to a microcontroller. The controller processes the data, and the results are finally displayed on a graphical LCD.
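The feature-extraction step described above can be sketched in Python. The particular features chosen here (per-axis mean, standard deviation and signal energy) are illustrative assumptions, not the exact feature set used in the original digital-pen system:

```python
import math

def time_domain_features(samples):
    """Extract simple time-domain features from one axis of an
    acceleration signal.  The feature choice is illustrative only."""
    n = len(samples)
    mean = sum(samples) / n
    variance = sum((s - mean) ** 2 for s in samples) / n
    energy = sum(s * s for s in samples) / n
    return {"mean": mean, "std": math.sqrt(variance), "energy": energy}

def feature_vector(ax, ay, az):
    """Concatenate per-axis features into one vector for the recognizer."""
    vec = []
    for axis in (ax, ay, az):
        f = time_domain_features(axis)
        vec.extend([f["mean"], f["std"], f["energy"]])
    return vec
```

In the real system these vectors would be computed on the microcontroller before RF transmission; frequency-domain features (e.g. dominant FFT bins) would be appended in the same way.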

1.3 PROPOSED SYSTEM

In the proposed system, we use Linux software, a camera and a display unit. The user writes with a pen or makes hand gestures in front of the camera, and whatever is drawn in front of it is shown on the display unit. On our Linux operating system we use Qt Creator, a user-friendly software development kit aimed mainly at GUI development. For image processing, alongside standard Linux commands, we use the OpenCV library. Users can write digits with the pen or make hand gestures, and the result is displayed on the display unit.
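The drawing step can be sketched as follows. In the real system the (x, y) pen-tip points would come from OpenCV capture and color segmentation of the pen; here they are supplied directly, and the canvas is a plain list of lists standing in for the image shown on the display unit. Function names are hypothetical:

```python
def draw_stroke(canvas, points, value=255):
    """Rasterize a tracked pen-tip trajectory onto a 2D canvas by
    interpolating straight segments between successive points."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        steps = max(abs(x1 - x0), abs(y1 - y0), 1)
        for i in range(steps + 1):
            x = round(x0 + (x1 - x0) * i / steps)
            y = round(y0 + (y1 - y0) * i / steps)
            if 0 <= y < len(canvas) and 0 <= x < len(canvas[0]):
                canvas[y][x] = value  # mark the pixel as written
    return canvas

canvas = [[0] * 8 for _ in range(8)]
draw_stroke(canvas, [(0, 0), (7, 7)])  # a diagonal stroke
```

Replacing the toy canvas with an OpenCV image array and feeding in pen-tip detections frame by frame gives the air-writing display loop described above.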

1.4 BLOCK DIAGRAM

Fig 1.1 Block Diagram

1.5 ORGANIZATION OF THESIS

Chapter 1 describes the aim of the project and gives an introduction. Chapter 2 presents the literature survey. Chapter 3 describes the hardware components used in the project, including a brief study of embedded systems and the UVC driver. Chapter 4 describes the software tools required for the project, including the Qt embedded framework, Qt Creator, the GCC compiler and OpenCV, and also covers the Linux operating system, basic commands and OS installation. Chapter 5 covers implementation and deployment. The project results and conclusion follow in the remaining chapters.

CHAPTER 2

LITERATURE SURVEY

With embedded systems fast expanding their reach, subject matter related to this field is available in abundance. While working on this project we studied material from various sources such as books, online articles and reference manuals. The knowledge gained from this activity has been of great help in understanding the basic concepts related to our project and has sparked further interest in the topic.

“Linux for Embedded and Real-Time Applications” by Doug Abbott has been of great help in providing an introduction to the process of building embedded systems on Linux. It helped us understand how to configure and build the Linux kernel and install toolchains.

We learned about the preponderance of ARM processors in the field of embedded systems, and about their features, from the document “The ARM Architecture” by Leonid Ryzhyk. The ARM architecture combines many useful features that make it better than peer processors. Being small in size and requiring little power, ARM processors deliver efficient performance in embedded applications.

LEAP MOTION:

The Leap Motion controller is a small USB peripheral device designed to be placed on a physical desktop, facing upward. It can also be mounted onto a virtual-reality headset. Using two monochromatic IR cameras and three infrared LEDs, the device observes a roughly hemispherical area to a distance of about 1 meter. The LEDs emit patternless IR light, and the cameras generate almost 200 frames per second of reflected data.

This data is then sent through a USB cable to the host PC, where it is analyzed by the Leap Motion software using “complex maths” in a way that has not been disclosed by the company, somehow synthesizing 3D position information by comparing the 2D frames generated by the two cameras. In a recent study, the overall average accuracy of the controller was shown to be 0.7 millimeters.
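Although Leap Motion's actual algorithm is undisclosed, the basic idea of recovering 3D position from two 2D views can be illustrated with textbook pinhole-stereo triangulation: depth Z = f · B / d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity of the same point in the two images. The parameter values below are made up for illustration:

```python
def depth_from_disparity(focal_px, baseline_mm, x_left_px, x_right_px):
    """Textbook pinhole-stereo depth: Z = f * B / d.
    This is NOT Leap Motion's undisclosed method, just the standard
    relation behind two-camera 3D reconstruction."""
    disparity = x_left_px - x_right_px  # shift between the two views
    if disparity <= 0:
        raise ValueError("point must appear further left in the left image")
    return focal_px * baseline_mm / disparity

# A point seen at x=120 px (left camera) and x=100 px (right camera),
# with f = 400 px and a 40 mm baseline, lies 800 mm from the cameras.
z = depth_from_disparity(400, 40, 120, 100)
```

Note how depth grows as disparity shrinks, which is why accuracy degrades toward the edge of the roughly 1-meter sensing range.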

The smaller observation area and higher resolution of the device differentiate the product from the Kinect, which is more suitable for whole-body tracking in a space the size of a living room. [30] In a demonstration to CNET, the controller was shown to perform tasks such as navigating a website, using pinch-to-zoom gestures on maps, high-precision drawing, and manipulating complex 3D data visualizations.

FEATURE PROCESSING AND MODELLING FOR GESTURE RECOGNITION:

A 6D motion gesture is represented by a 3D spatial trajectory augmented by three further dimensions of orientation. Using different tracking technologies, the motion can be tracked explicitly with position and orientation, or implicitly with acceleration and angular velocity. In this work, we address the problem of motion gesture recognition for command-and-control applications. Our main contribution is to investigate the relative effectiveness of various feature dimensions for motion gesture recognition in both user-dependent and user-independent cases.

We present a statistical feature-based classifier as the baseline and propose an HMM-based recognizer, which offers greater flexibility in feature selection and achieves better recognition accuracy than the baseline system. Our motion gesture database, which contains both explicit and implicit motion information, enables us to compare the recognition performance of different tracking signals on common ground. This study also provides insight into the recognition rate achievable with different tracking devices, which is important for a system designer choosing the proper tracking technology.
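A statistical feature-based baseline of the kind mentioned above can be sketched as a nearest-class-mean classifier over feature vectors. This toy stand-in classifies a gesture by the Euclidean distance to each class's mean vector; the HMM recognizer would instead model each gesture as a sequence of observations. All names and data here are illustrative:

```python
def train_means(samples_by_class):
    """Compute the mean feature vector of each gesture class from
    a dict mapping class label -> list of feature vectors."""
    means = {}
    for label, vectors in samples_by_class.items():
        dims = len(vectors[0])
        means[label] = [sum(v[d] for v in vectors) / len(vectors)
                        for d in range(dims)]
    return means

def classify(means, vector):
    """Assign the gesture class whose mean is nearest (squared
    Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(means, key=lambda label: dist2(means[label], vector))

# Hypothetical 2-feature training data for two gesture classes:
means = train_means({
    "circle": [[0.0, 0.0], [0.2, 0.0]],
    "swipe":  [[5.0, 5.0], [5.2, 5.0]],
})
```

A new feature vector is then labelled with `classify(means, vector)`; the HMM-based recognizer replaces this single-vector decision with per-class sequence likelihoods.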

CHAPTER 3

HARDWARE COMPONENTS

The project is aimed at evaluating the performance of an operating system on an embedded system. Before delving into its implementation, an introduction is needed to the components involved in the project. The whole report is centered on the field of embedded systems and the use of Linux to run applications on them. Hence an introduction to embedded systems is provided.

3.1 EMBEDDED SYSTEMS

An embedded system is a special-purpose computer system designed to perform a small set of designated activities. Embedded systems date back to the late 1960s, when they were used to control electromechanical telephone switches. The first recognizable embedded system was the Apollo Guidance Computer, developed by Charles Draper and his team. Later they found their way into the military, medical, aerospace and automobile industries. Today they are widely used to serve various purposes such as:

• Network equipment such as firewall, router, switch, and so on.

• Consumer equipment such as MP3 players, cell phones, PDAs, digital cameras, camcorders, home entertainment systems and so on.

• Household appliances such as microwaves, washing machines, televisions, etc.

• Mission-critical systems such as satellites and flight control.

The key factors that differentiate an embedded system from a desktop computer are:

• They are cost sensitive.

• Most embedded systems have real-time constraints. A multitude of CPU architectures, such as ARM, MIPS and PowerPC, is used in embedded systems.

• Application-specific processors are employed in embedded systems.

• Embedded systems require very few resources in terms of ROM and other I/O devices compared to a desktop computer.

3.2 UVC DRIVER

A UVC (or Universal Video Class) driver is a USB-class driver. A driver enables a device, such as your webcam, to communicate with your computer’s operating system, and USB (or Universal Serial Bus) is a common type of connection that allows high-speed data transfer. Devices that are equipped with a UVC driver, such as the Logitech QuickCam Pro 9000 for Business, are capable of streaming video.

In other words, with a UVC driver you can simply connect your webcam to your PC and it will be ready to use. What does a UVC driver have to do with my webcam being plug and play? It is the UVC driver that enables the webcam to be plug and play. A webcam with a UVC driver does not need any additional software to work. Once you plug your webcam in, it can work with a video-calling application such as Skype, Windows Live Messenger, or Microsoft Office Communicator.

Are there different kinds of webcam drivers? Yes, there are two kinds of webcam drivers:

• The one included with the installation disc that came with your product. For your webcam to work properly, this driver requires some time to install. It is specifically tuned for your webcam, designed by your webcam manufacturer and optimized for webcam performance.

• A UVC driver

You can only use one driver at a time, but either one will allow you to use your webcam with various applications. The following Logitech webcams support UVC:

• Logitech Quick Cam Pro 9000 for Business

• Logitech Quick Cam Pro for Notebooks Business

• Logitech Quick Cam Communicate MP for Business

• Logitech Quick Cam Deluxe for Notebooks Business

• Logitech Quick Cam 3000 for Business

Does my computer support UVC? Yes: most current operating systems support UVC. Although UVC is a relatively new format, it is quickly becoming common.

There are more environments that support UVC, but what follows is a listing of the most common:

Windows:

• Windows XP Service Pack 2 and higher

• Windows Vista

Mac:

• Tiger OS (versions 10.4.9 and higher)

• Leopard OS (versions 10.5.1 and higher)

Linux:

• There are many different versions of Linux. Please check with your distribution vendor to determine if your version supports UVC.

How does a webcam work?

A webcam is a compact digital camera you can attach to your PC to broadcast video images in real time (as they happen). Much like a digital camera, it captures light through a small lens at the front using a tiny grid of microscopic light detectors built into an image-sensing microchip (either a charge-coupled device (CCD) or, more likely these days, a CMOS image sensor). As we’ll see in a moment, the image sensor and its circuitry convert the picture in front of the camera into digital format: a string of ones and zeros that a computer knows how to handle.

Unlike a digital camera, a webcam has no built-in memory chip or flash memory card: it doesn’t need to “remember” pictures, because it is designed to capture and transmit them immediately to a PC. That is why webcams have USB cables coming out of the back. The USB cable supplies power to the webcam from the PC and carries the digital information captured by the webcam’s image sensor back to the PC, from where it travels out onto the Internet.

Photograph: Unlike the webcam above, which you can focus by turning its lens, this Microsoft LifeCam VX-800 has a fixed focus. If you look closely, you can just see the power indicator light (top left, not currently lit) and the microphone (top right). The stand can simply rest on a table or open up to clip over your laptop.

How does an image sensor chip work?

All webcams work in broadly the same way: they use an image sensor chip to capture moving images and convert them into streams of digits that are uploaded over the Internet. The image sensor chip is the heart of a webcam, so how does that part work? Let’s take a webcam apart and find out.

Take the outer case off a webcam and you’ll find it is little more than a plastic lens mounted directly onto a tiny electronic circuit board underneath. The lens screws in and out to change its focal length, controlling the focus of your cam. Now take the lens off and you can see the image sensor (CCD or CMOS chip): it is the square component in the middle of the circuit. Only the tiny, green-colored central part is light-sensitive; the rest of the chip is concerned with connecting the light detector to the larger circuit that surrounds it.


Webcams versus digital cameras. The image sensor is the “electronic eye” of a webcam or a digital camera. It is a semiconductor chip made of millions of tiny, light-sensitive squares arranged in a grid pattern. These squares are called pixels. Basic webcams use relatively small sensors with just a few hundred thousand pixels (typically a grid of 640 × 480). Good digital cameras use sensors with many more pixels; that is why cameras are compared by how many megapixels (millions of pixels) they have.

A basic webcam has around 0.3 megapixels (300,000, in other words), while a digital camera with 6 megapixels has more than 20 times as many, most likely arranged in a rectangle three thousand across and two thousand down (3000 × 2000 = 6 million). A better camera rated at 12 megapixels would have a 4000 × 3000 pixel sensor. Take a photo of the same size with those two cameras and the 12-megapixel one will give you 1000 more dots horizontally and 1000 more vertically: smaller dots giving more detail and higher resolution. A single pixel in a decent sensor is something like 10 micrometers (10 μm) across (5-10 times smaller than the diameter of a typical human hair)!
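The megapixel arithmetic above is easy to check directly:

```python
def megapixels(width, height):
    """Sensor resolution in megapixels (millions of pixels)."""
    return width * height / 1_000_000

# The figures quoted above:
basic_webcam = megapixels(640, 480)    # about 0.3 MP
six_mp_cam   = megapixels(3000, 2000)  # 6.0 MP
twelve_mp    = megapixels(4000, 3000)  # 12.0 MP
```

Note that doubling the megapixel count (6 to 12) adds only about 1000 pixels to each side, since resolution per side grows with the square root of the pixel count.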

How does an image sensor convert a picture into digital form?

When you take a digital photo or stare into your webcam, light zooms in through the lens. This incoming “picture” hits the image sensor, which breaks it up into individual pixels that are converted into numeric form. CCDs and CMOS chips, the two kinds of image sensor, do this job in slightly different ways. Both initially convert incoming light rays into electricity, much like photoelectric cells (used in things like “magic eye” intruder alarms or restroom washbasins that switch on automatically when you put your hands under the faucet). But a CCD is essentially an analog optical chip that converts light into varying electrical signals, which are then passed to one or more other chips where they are digitized (turned into numbers). By contrast, a CMOS chip does everything in one place: it captures light rays and turns them into digital signals all on the one chip, so it is essentially a digital device where a CCD is an analog one. CMOS chips work faster and are cheaper to make in high volume than CCDs, so they are now used in most low-cost cellphone cameras and webcams. However, CCDs are still widely used in some applications, such as low-light astronomy.

Whether the pictures are produced by a CMOS sensor or by a CCD and other hardware, the basic process is the same: an incoming picture is converted into an outgoing pattern of digital pixels. We will simply refer to “the image sensor” from now on (and ignore whether it is a CCD plus other chips or a CMOS sensor). First, the image sensor measures how much light is arriving at each pixel. This information is turned into a number that can be stored on a memory chip inside the camera. In this way, taking a digital photo converts the picture you see into a long string of numbers. Each number describes one pixel in the image: how bright or dark it is, and what color it is.
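The "light level into a number" step is a quantization: the analog measurement is scaled and rounded to an integer, for example 0-255 for an 8-bit sensor. Here is a simplified model of that step (the function name and the normalized 0.0-1.0 input range are assumptions for illustration):

```python
def quantize(light_level, max_level=1.0, bits=8):
    """Convert an analog light measurement (0..max_level) into the
    integer a sensor's electronics would store, e.g. 0-255 for 8 bits.
    A simplified model of the digitization step, not real sensor code."""
    levels = (1 << bits) - 1                       # 255 for 8 bits
    clamped = min(max(light_level, 0.0), max_level)  # saturate out-of-range light
    return round(clamped / max_level * levels)
```

A color pixel is simply three such numbers, one each for the red, green and blue components, which is how each pixel ends up described by “how bright or dark and what color it is.”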

Step by step:

1. Light from the object (in this case, a bicycle) enters the camera lens.

2. The image sensor inside the camera splits the image up into millions of pixels (squares). An LCD display on the back of the camera shows you the image that the sensor is capturing—not an image of the object seen through a series of lenses (as with a conventional camera), but a redrawn, computerized version of the original object displayed on a screen.

3. The sensor measures the color and brightness of each pixel.

4. The color and brightness are stored as binary numbers (patterns of zeros and ones) in the camera’s memory card. When you connect your camera to a computer, these numbers are transmitted instantly down the wire.

