Given here are various topics that showcase our expertise and help you choose one for your work. You are also welcome to suggest your own topic; we will assist you in completing your thesis and publishing it in journals.
This paper focuses on extracting individual speech signals from an artificially mixed speech signal involving two speakers. During training of the backpropagation algorithm (BPA) and the radial basis function (RBF) network, two inputs are used in the input layer and two outputs in the output layer. The performance of the RBF network is much superior to that of the BPA in recovering the original signals. For the given speech samples (sampled at 8000 Hz), only about 90% of the original speech is recovered satisfactorily; complete recovery of the original speech remains a difficult task.
Image coding methods based on adaptive wavelet transforms and those employing zerotree quantization have been shown to be successful in recent years. We present a general zerotree structure for arbitrary wavelet packet geometry in an image-coding framework. A fast basis selection algorithm is developed, which uses a Markov chain-based cost estimate of encoding the image with this structure. As a result, our adaptive wavelet zerotree image coder has relatively low computational complexity, performs comparably to state-of-the-art image coders, and is capable of progressively encoding images.
Image coding methods based on adaptive wavelet transforms and those employing zerotree quantization have been shown to be successful in recent years. In this work, adaptive wavelet packets along with zerotree embedded coding are implemented. An image is decomposed to a maximum of 6 levels to form wavelet packets. Shannon entropy is used as the cost for the wavelet packet energies, and the best tree is chosen. The best tree is presented to the embedded zerotree wavelet (EZW) coder for compression. After transmission of the data, EZW decoding is performed and the image is reconstructed from the decoded data.
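As a rough illustration of the entropy-driven packet selection described above, here is a minimal Python sketch (assuming the PyWavelets package, pywt, and a grayscale image array): it decomposes the image into wavelet packets and scores each full decomposition level with a Shannon-entropy cost. The full best-tree search and the EZW coding stage are not reproduced here.

```python
import numpy as np
import pywt

def shannon_cost(coeffs):
    """Non-normalized Shannon entropy cost -sum(c^2 * log(c^2)) of a coefficient array."""
    c2 = np.ravel(coeffs) ** 2
    c2 = c2[c2 > 1e-12]                         # avoid log(0)
    return float(-np.sum(c2 * np.log(c2)))

def best_full_level(img, wavelet="db2", max_level=6):
    """Return the full decomposition level with the lowest total entropy cost
    (a simplified stand-in for the best wavelet-packet tree search)."""
    wp = pywt.WaveletPacket2D(data=img, wavelet=wavelet,
                              mode="symmetric", maxlevel=max_level)
    costs = {}
    for level in range(1, wp.maxlevel + 1):
        nodes = wp.get_level(level)             # all packets at this level
        costs[level] = sum(shannon_cost(n.data) for n in nodes)
    best = min(costs, key=costs.get)
    return best, costs

if __name__ == "__main__":
    img = np.random.rand(256, 256)              # placeholder for a real image
    level, costs = best_full_level(img)
    print("selected level:", level)
```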
Conventional remote password authentication schemes allow a serviceable server to authenticate the legitimacy of a remote login user. However, these schemes are not suited to multiserver architecture environments. To address this problem, a new authentication scheme based on a neural network is used for multiserver architectures. This password authentication system is a pattern classification system based on an artificial neural network. In this scheme, users only need to remember their user identity and password to log in to the various servers.
Blind deterministic estimation of the orthogonal frequency division multiplexing (OFDM) frequency offset via oversampling is proposed in this paper. This method utilizes the intrinsic phase shift of neighboring sample points incurred by the frequency offset that is common among all subcarriers. The proposed method is data efficient—it requires only a single OFDM symbol to achieve reliable estimation, hence making it more suitable to systems with stringent delay requirement and mobility-induced channel variation.
The proposed scheme is devised to perfectly retrieve the frequency offset in the absence of noise. Quite remarkably, we show that in the presence of channel noise, this intuitive scheme is indeed the maximum likelihood estimate of the carrier frequency offset. The possible presence of virtual carriers is also accommodated in the system model, and some interesting observations are obtained. The Cramér–Rao lower bound is derived for the oversampling-based signal model, and we show through numerical simulation that the proposed algorithm is efficient. Practical issues such as identifiability, the front-end filter bandwidth, and the possible presence of correlated noise are also carefully addressed.
This project involves recognizing the English characters A-Z and a-z and the numerals 0-9. The genetic algorithm concepts of population generation, reproduction, crossover and mutation provide a solution to the character recognition problem in a small number of generations, given a chromosome size of about 350 and a population size of about 50.
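The following Python sketch illustrates the GA machinery mentioned above (population generation, reproduction, crossover and mutation) on a toy task: evolving 50 binary chromosomes of length 350 toward a target character bitmap, with fitness being the number of matching bits. All parameters and the target are illustrative stand-ins, not the project's actual data.

```python
import numpy as np

rng = np.random.default_rng(0)

CHROM_LEN = 350        # e.g. a flattened binary character template
POP_SIZE = 50
GENERATIONS = 200
P_MUTATE = 0.01

target = rng.integers(0, 2, CHROM_LEN)          # stand-in for a known character bitmap

def fitness(pop):
    return (pop == target).sum(axis=1)          # matching bits per chromosome

pop = rng.integers(0, 2, (POP_SIZE, CHROM_LEN))
for gen in range(GENERATIONS):
    fit = fitness(pop)
    if fit.max() == CHROM_LEN:
        break
    # reproduction: fitness-proportional (roulette wheel) selection of parents
    probs = fit / fit.sum()
    parents = pop[rng.choice(POP_SIZE, size=POP_SIZE, p=probs)]
    # single-point crossover between consecutive parent pairs
    children = parents.copy()
    for i in range(0, POP_SIZE - 1, 2):
        cut = rng.integers(1, CHROM_LEN)
        children[i, cut:] = parents[i + 1, cut:].copy()
        children[i + 1, cut:] = parents[i, cut:].copy()
    # mutation: flip each bit with a small probability
    flip = rng.random(children.shape) < P_MUTATE
    pop = np.where(flip, 1 - children, children)

print("best fitness:", fitness(pop).max(), "of", CHROM_LEN, "after", gen + 1, "generations")
```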
Ultrasonic measurements of human carotid and femoral artery walls are conventionally obtained by manually tracing interfaces between tissue layers. The drawbacks of this method are the interobserver variability and inefficiency. In this work, we present a new automated method which reduces these problems. By applying a multiscale dynamic programming (DP) algorithm, approximate vessel wall positions are first estimated in a coarse-scale image, which then guide the detection of the boundaries in a fine-scale image. In both cases, DP is used for finding a global optimum for a cost function. The cost function is a weighted sum of terms, in fuzzy expression forms, representing image features and geometrical characteristics of the vessel interfaces. The weights are adjusted by a training procedure using human expert tracings. Operator interventions, if needed, also take effect under the framework of global optimality. This reduces the amount of human intervention and, hence, variability due to subjectiveness. By incorporating human knowledge and experience, the algorithm becomes more robust. A thorough evaluation of the method in the clinical environment shows that interobserver variability is evidently decreased and so is the overall analysis time. We conclude that the automated procedure can replace the manual procedure and leads to an improved performance.
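The cost function and multiscale scheme above are specific to the cited work, but the underlying dynamic-programming idea of a globally optimal boundary through a cost image can be sketched as follows (a minimal single-scale Python example; the cost image and step constraint are placeholders).

```python
import numpy as np

def optimal_boundary(cost, max_step=1):
    """Dynamic programming: find the row index per column that minimizes the
    accumulated cost, allowing the boundary to move at most `max_step` rows
    between neighbouring columns (a toy version of the vessel-wall search)."""
    rows, cols = cost.shape
    acc = np.full((rows, cols), np.inf)
    back = np.zeros((rows, cols), dtype=int)
    acc[:, 0] = cost[:, 0]
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_step), min(rows, r + max_step + 1)
            prev = acc[lo:hi, c - 1]
            best = int(np.argmin(prev))
            acc[r, c] = cost[r, c] + prev[best]
            back[r, c] = lo + best
    # backtrack the globally optimal path
    path = np.zeros(cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path

if __name__ == "__main__":
    # cost image: e.g. negative intensity gradient, so strong edges are cheap
    img = np.random.rand(64, 128)
    cost = -np.abs(np.gradient(img, axis=0))
    print(optimal_boundary(cost)[:10])
```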
We present a contextual clustering procedure for statistical parametric maps (SPM) calculated from time-varying three-dimensional images. The algorithm can be used for detecting neural activations from functional magnetic resonance images (fMRI). An important characteristic of an SPM is that the intensity distribution of the background (non-active areas) is known, whereas the distributions of the activation areas are not. The developed contextual clustering algorithm divides an SPM into background and activation areas so that the probability of detecting false activations by chance is controlled, i.e., hypothesis testing is performed. Unlike the widely used voxel-by-voxel testing, neighborhood information is utilized, which is an important difference.
This program involves the recognition of the given characters. The characters are joined and treated as a word when a space is encountered. The words are string-matched against a thesaurus library and only valid words are taken into account. Many such words together form a sentence.
An artificial neural network technique based on the backpropagation algorithm is used. This is a supervised learning approach requiring inputs and target outputs. The artificial neural network is trained with the inputs and target outputs, starting from initial random weights. Training is stopped once the objective is reached, that is, when the mean squared error falls below a set value. At this point the final weights are stored in a file.
Testing the ANN follows. During this stage a pattern is presented to the ANN and processed with the final weights. The output of the ANN indicates a character. The process continues, and when a space is encountered, a word is formed and string-matched against the available thesaurus library.
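A minimal Python sketch of the training and testing procedure described above, assuming illustrative input patterns and two-bit target codes: a small network is trained by backpropagation until the MSE objective is met, the final weights are saved to a file, and a pattern is then tested with the stored weights.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# illustrative data: 4 input patterns and 2-bit target codes (stand-ins for character data)
X = rng.random((4, 8))
T = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

n_in, n_hid, n_out = X.shape[1], 6, T.shape[1]
W1 = rng.uniform(-0.5, 0.5, (n_in, n_hid))       # initial random weights
W2 = rng.uniform(-0.5, 0.5, (n_hid, n_out))
lr, target_mse = 0.5, 1e-3

for epoch in range(20000):
    h = sigmoid(X @ W1)                           # forward pass
    y = sigmoid(h @ W2)
    err = T - y
    mse = float(np.mean(err ** 2))
    if mse < target_mse:                          # objective (MSE) reached: stop training
        break
    d_out = err * y * (1 - y)                     # backward pass through the sigmoids
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 += lr * h.T @ d_out
    W1 += lr * X.T @ d_hid

np.savez("final_weights.npz", W1=W1, W2=W2)       # store final weights for later testing

# testing: present a pattern and read off the output code using the stored weights
w = np.load("final_weights.npz")
test_out = sigmoid(sigmoid(X[2:3] @ w["W1"]) @ w["W2"])
print("epochs:", epoch + 1, "mse:", round(mse, 5), "test output:", test_out.round(2))
```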
This project is concerned with the “Direct Search Method for Solving the Economic Dispatch Problem”. The method adopted in this project is very efficient in determining the power generation schedule with minimum fuel consumption. Minimum fuel consumption leads to minimum expenditure while satisfying the load demand.
An efficient and practical approach for solving the economic dispatch (ED) problem considering transmission capacity constraints is developed. The direct search method (DSM) is chosen because it can handle a number of inequality and equality constraints and units with any kind of fuel cost function.
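Below is a hedged Python sketch of a direct-search-style economic dispatch: three units with assumed quadratic fuel-cost coefficients, capacity limits handled by clipping and the demand equality handled as a penalty. The actual DSM and constraint handling of the project may differ.

```python
import numpy as np

# quadratic fuel-cost coefficients a + b*P + c*P^2 for three illustrative units ($/h, P in MW)
a = np.array([500.0, 400.0, 200.0])
b = np.array([5.3, 5.5, 5.8])
c = np.array([0.004, 0.006, 0.009])
p_min = np.array([100.0, 100.0, 50.0])
p_max = np.array([450.0, 350.0, 225.0])
demand = 800.0

def total_cost(P, penalty=1e4):
    fuel = np.sum(a + b * P + c * P ** 2)
    return fuel + penalty * abs(np.sum(P) - demand)   # demand equality as a penalty term

def direct_search(P, step=50.0, tol=0.01):
    """Simple pattern search: perturb one unit at a time, keep improving moves,
    shrink the step when no move helps (a toy stand-in for the project's DSM)."""
    best = total_cost(P)
    while step > tol:
        improved = False
        for i in range(len(P)):
            for delta in (+step, -step):
                trial = P.copy()
                trial[i] = np.clip(trial[i] + delta, p_min[i], p_max[i])  # capacity limits
                cost = total_cost(trial)
                if cost < best:
                    P, best, improved = trial, cost, True
        if not improved:
            step *= 0.5
    return P, best

P0 = np.clip(np.full(3, demand / 3), p_min, p_max)
P_opt, cost = direct_search(P0)
print("dispatch (MW):", P_opt.round(1), "cost ($/h):", round(cost, 1))
```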
Microcalcification (MC) clusters, which are an early sign of breast cancer, appear as isolated bright spots in mammograms and therefore correspond to local maxima of the image. The first step of our method is the detection of these local maxima. After detecting the maxima locations, we rank them according to a higher-order statistical test performed over the subband data obtained from the adaptive wavelet transform. The distribution of wavelet data corresponding to regular breast tissue is almost Gaussian. MCs, however, are different in nature from regular breast tissue and produce outliers in the subband domain. We take advantage of this fact and rank the local maxima according to a higher-order statistical test estimated in the neighborhood of each local maximum. When the data is Gaussian the test statistic becomes zero; the higher the value of the test, the higher the rank of the maximum. Peaks due to MCs therefore receive high ranks, while maxima due to small variations in pixel values and smooth edges receive low ranks.
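As a hedged illustration, the sketch below detects local maxima, takes one wavelet detail subband, and ranks each maximum by the kurtosis of the subband data around it (kurtosis is a common higher-order statistic that is near zero for Gaussian data; the paper's specific test may differ), using numpy, scipy and pywt.

```python
import numpy as np
import pywt
from scipy import ndimage
from scipy.stats import kurtosis

def rank_maxima(img, win=8):
    """Detect local maxima of the image and rank them by the kurtosis of the
    detail-subband data in a window around each maximum."""
    # 1) local maxima of the image
    peaks = (img == ndimage.maximum_filter(img, size=5)) & (img > img.mean())
    ys, xs = np.nonzero(peaks)
    # 2) one level of wavelet decomposition; use the diagonal detail subband
    _, (_, _, hh) = pywt.dwt2(img, "db2")
    scores = []
    for y, x in zip(ys, xs):
        yy, xx = y // 2, x // 2                      # map image coords to subband coords
        patch = hh[max(0, yy - win):yy + win, max(0, xx - win):xx + win]
        # kurtosis is ~0 for Gaussian data; outliers from microcalcifications raise it
        scores.append(kurtosis(patch, axis=None, fisher=True))
    order = np.argsort(scores)[::-1]                 # highest score = highest rank
    return [(ys[i], xs[i], scores[i]) for i in order]

if __name__ == "__main__":
    img = np.random.rand(256, 256)
    img[100, 120] += 3.0                             # a synthetic bright spot
    print(rank_maxima(img)[:3])
```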
Satellite images are useful for creating updated land cover maps, but a major problem is that regions below clouds are not covered by the sensor. Hence cloud detection and removal are vital in the processing of satellite imagery. The objective of this paper is to propose an approach for the automatic detection and removal of cloud and cloud-shadow contamination from satellite images. After detection, the method selectively replaces the contaminated data with data from different images of the same area to minimize the effect of cloud contamination. Detection is achieved by two cloud segmentation algorithms, namely average brightness thresholding (ABT) and region growing, and the performance of the two algorithms is compared to detect the exact cloud region. This is followed by detection of the corresponding shadow pair. Finally the detected cloud contamination is removed and replaced with data from different images of the same area. The algorithms were tested using multispectral ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) and LISS-III data. The procedure is computationally efficient and hence could be very useful in providing improved weather forecast, land cover and analysis products.
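A minimal sketch of the ABT idea and the replacement step, assuming a single band as a numpy array; the thresholding constant and minimum region size are illustrative, and the shadow-pairing and region-growing stages are omitted.

```python
import numpy as np
from scipy import ndimage

def abt_cloud_mask(band, k=1.5, min_area=50):
    """Average brightness thresholding: flag pixels much brighter than the scene
    average as cloud, then drop tiny regions (illustrative parameters only)."""
    thr = band.mean() + k * band.std()
    mask = band > thr
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, np.nonzero(sizes >= min_area)[0] + 1)
    return keep

def fill_from_reference(band, mask, reference):
    """Replace cloud-contaminated pixels with data from another image of the same area."""
    out = band.copy()
    out[mask] = reference[mask]
    return out
```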
Information retrieval using probabilistic techniques has attracted significant attention on the part of researchers in information and computer science over the past few decades. In the 1980s, knowledge-based techniques also made an impressive contribution to “intelligent” information retrieval and indexing. Information science researchers have turned to other newer artificial-intelligence-based inductive learning techniques including neural networks, symbolic learning, and genetic algorithms. These newer techniques, which are grounded on diverse paradigms, have provided great opportunities for researchers to enhance the information processing and retrieval capabilities of current information storage and retrieval systems.
Knowledge-intensive organizations have a vast array of information contained in large document repositories. With the advent of e-commerce and corporate intranets/extranets, these repositories are expected to grow at a fast pace. This explosive growth has led to huge, fragmented, and unstructured document collections. Although it has become easier to collect and store information in document collections, it has become increasingly difficult to retrieve relevant information from them. The work addresses the issue of improving retrieval performance, in terms of precision and recall, for retrieval from document collections.
There are three important research paradigms in the area of information retrieval (IR): probabilistic IR, knowledge-based IR, and artificial-intelligence-based techniques such as neural networks and symbolic learning. Very few researchers have tried to use evolutionary algorithms such as genetic algorithms (GAs). Previous attempts at using GAs have concentrated on modifying document representations or query representations. This work looks at the possibility of applying GAs to adapt various matching functions, in the hope that such adaptation will lead to better retrieval performance than a single matching function. The overall matching function is treated as a weighted combination of scores produced by individual matching functions, and this overall score is used to retrieve documents. The weights associated with the individual functions are searched using a genetic algorithm.
In the modern era, police personnel play an important role in maintaining law and order, and they rely heavily on the Forensic Science department to identify culprits. Forensics deals with comparing and identifying the fingerprints of unknown criminals against a standard fingerprint database.
The fingerprint images are captured using a photographic camera and converted to digital form using a scanner. The images are processed to enhance the inherent features useful for subsequent analysis. Descriptors of the images are chosen so that they are invariant to scaling, translation and rotation. The prints are recognized by comparing the descriptors calculated from the acquired image with those calculated from the original image.
An image processing algorithm is used to compute moments that are invariant to the orientation of the photograph. These moments serve as the parameters used to train an artificial neural network (ANN), which learns the mapping from the moment features to the fingerprint identity. The ANN is trained with the backpropagation algorithm, which is based on the steepest descent method. The algorithm is trained on the fingerprint data and produces a final set of weights, which is then used for testing new fingerprints.
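A small Python sketch of the moment-based feature extraction, assuming OpenCV (cv2) and a hypothetical grayscale file fingerprint.png: the seven Hu moments (invariant to translation, scale and rotation) are computed and log-scaled to form the ANN input vector.

```python
import cv2
import numpy as np

def invariant_moment_features(gray):
    """Compute the seven Hu moments of a fingerprint image (invariant to
    translation, scale and rotation) and log-scale them so they are suitable
    as inputs to a neural network."""
    m = cv2.moments(gray.astype(np.float32))
    hu = cv2.HuMoments(m).flatten()
    # the log transform compresses the large dynamic range of the raw moments
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

if __name__ == "__main__":
    img = cv2.imread("fingerprint.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
    features = invariant_moment_features(img)
    print(features)        # 7-element vector used as the ANN input pattern
```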
Facial recognition plays an important role in the field of forensic science, and many different methods have been proposed. Intelligent facial recognition is needed today because culprits often use facial masks. After extracting the features of a face, intelligent algorithms such as artificial neural networks can be used to correctly identify the true face. The neural network proposed here is a radial basis function (RBF) network. Its inputs are the outputs obtained from feature-extraction methods such as principal component analysis (PCA) and Fisher's linear discriminant (FLD).
This work explores the use of local parametrized models of image motion for recovering and recognizing the non-rigid and articulated motion of human faces. Parametric flow models (for example affine) are popular for estimating motion in rigid scenes. It is observed that within local regions in space and time, such models not only accurately model non-rigid facial motions but also provide a concise description of the motion in terms of a small number of parameters.
These parameters are intuitively related to the motion of facial features during facial expressions. It can be shown how expressions such as anger, happiness, surprise, fear, disgust, and sadness can be recognized from the local parametric motions in the presence of significant head motion. The motion tracking and expression recognition approach performed with high accuracy in extensive laboratory experiments involving 40 subjects as well as in television and movie sequences.
Our project is to recognize real two-dimensional (2-D) or three-dimensional (3-D) objects from 2-D intensity images using genetic algorithms (GAs). Our approach is model-based (i.e., we assume a predefined set of models), while the recognition strategy relies on the recently proposed theory of algebraic functions of views (AFoVs). According to this theory, the variety of 2-D views depicting an object can be expressed as a combination of a small number of 2-D views of the object. This implies a simple and powerful strategy for object recognition: novel 2-D views of an object (2-D or 3-D) can be recognized by simply matching them to combinations of known 2-D views of the object. This is an important idea, which is also supported by psychophysical findings indicating that the human visual system works in a similar way. The problem can be solved either in the space of feature matches among the views (the “image space”) or in the space of parameters (the “transformation space”). In general, both of these spaces are very large, making the search very time consuming, so we propose using GAs to search them efficiently. The effectiveness of the GA approach is shown on a set of increasingly complex real scenes where exact and near-exact matches are found reliably and quickly.
The work presents a unified approach to human activity capture and recognition. The experimental data are in the .asf and .amc formats: the .asf file gives the skeleton information and the .amc file gives the relative movements of the joints. This information is converted into a set of metrics which are then analyzed using a continuous-density hidden Markov model (HMM) to characterize the dynamic change in the motion parameters, and experimental results were obtained. An artificial neural network is introduced to add intelligence.
The main objective of the project is to develop a harmonic elimination algorithm able to identify the harmonics that occur in an electrical circuit. These harmonics cause energy loss. The algorithm detects the harmonics and suppresses them from the system, so that the power and energy losses are minimized. A digital harmonic elimination scheme using the wavelet transform with multiresolution analysis (MRA) is developed in this project. The wavelet transform is applied to extract the fundamental waveform from the distorted waveform.
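The following Python sketch (assuming PyWavelets) illustrates the MRA idea: decompose the distorted waveform, drop the detail coefficients, and reconstruct the approximation as an estimate of the fundamental. The wavelet, decomposition level and sampling rate are illustrative choices.

```python
import numpy as np
import pywt

def extract_fundamental(signal, wavelet="db4", level=4):
    """Multi-resolution analysis: decompose the distorted waveform, discard the
    detail (harmonic/noise) coefficients and reconstruct only the approximation,
    which carries the low-frequency fundamental component."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]  # keep only the approximation
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

if __name__ == "__main__":
    fs = 3200                                                # illustrative sampling rate (Hz)
    t = np.arange(int(0.2 * fs)) / fs
    fundamental = np.sin(2 * np.pi * 50 * t)                 # 50 Hz fundamental
    distorted = (fundamental
                 + 0.3 * np.sin(2 * np.pi * 250 * t)         # 5th harmonic
                 + 0.2 * np.sin(2 * np.pi * 350 * t))        # 7th harmonic
    recovered = extract_fundamental(distorted)
    print("rms error:", round(float(np.sqrt(np.mean((recovered - fundamental) ** 2))), 3))
```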
Image compression is the process of reducing the size of an image file. This helps in transporting the image from one place to another, for example on the World Wide Web or when copying to a compact disc. Researchers have tried different methods over time to properly compress an image, transport it safely and decompress it to get back the original. During decompression there should be no change in the content of the image and minimal loss in its quality, where quality can be thought of as retaining the luminance (brightness) and chrominance (color) properties as far as possible.
Different conventional methods such as lossy and lossless image compression are available. In spite of this, research is carried out on compressing images with intelligent methods for sophisticated applications. One class of intelligent methods is supervised algorithms; backpropagation (BP) is one such supervised method that can be applied to compressing and decompressing an image.
The BP algorithm is a supervised method that requires inputs and outputs for the network to learn the contents of the image. After learning, a set of final weights is created, which is then used during recovery of the original image.
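A compact numpy sketch of this idea, under the assumption that the image is processed in 8x8 blocks: a 64-16-64 network is trained to reproduce its input, so the 16 hidden activations act as the compressed code and the output layer performs the decompression. All sizes and learning parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# split a grayscale image (values in [0, 1]) into 8x8 blocks -> 64-dimensional vectors
img = rng.random((64, 64))                       # placeholder for a real image
blocks = img.reshape(8, 8, 8, 8).swapaxes(1, 2).reshape(-1, 64)

n_hidden = 16                                    # 64 -> 16 gives a 4:1 compression of each block
W1 = rng.uniform(-0.1, 0.1, (64, n_hidden))
W2 = rng.uniform(-0.1, 0.1, (n_hidden, 64))
lr = 0.5

for epoch in range(5000):                        # train the network to reproduce its input
    h = sigmoid(blocks @ W1)                     # hidden activations = compressed code
    y = sigmoid(h @ W2)
    err = blocks - y
    d_out = err * y * (1 - y)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 += lr * h.T @ d_out / len(blocks)
    W1 += lr * blocks.T @ d_hid / len(blocks)

# "decompression": push the stored hidden codes through the output layer again
codes = sigmoid(blocks @ W1)
reconstructed = sigmoid(codes @ W2).reshape(8, 8, 8, 8).swapaxes(1, 2).reshape(64, 64)
print("reconstruction MSE:", float(np.mean((img - reconstructed) ** 2)))
```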
Image restoration is a vital part of many image processing applications. The purpose of image restoration is to recover the original image from a degraded observation. In many instances, the degraded observation g(x,y) can be modeled as the two-dimensional convolution of the true image f(x,y) with the point-spread function (also called the blurring function) h(x,y) of a linear shift-invariant system, plus additive noise n(x,y). That is, g(x,y) = f(x,y)*h(x,y) + n(x,y).
In many situations, the point-spread function h(x,y) is known explicitly prior to the image restoration process. In these cases, the recovery of f(x,y) is known as the classical linear image restoration problem. This problem has been thoroughly studied and a long list of restoration methods for this situation includes numerous well-known techniques, such as inverse filtering, Wiener filtering, least-squares filtering, etc.
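For reference, here is a minimal numpy sketch of Wiener deconvolution for the known-PSF case, with an assumed 5x5 uniform blur and an illustrative noise-to-signal ratio; it is a sketch of the standard filter, not the project's exact implementation.

```python
import numpy as np

def wiener_deconvolve(g, h, nsr=0.01):
    """Frequency-domain Wiener filter: estimate f from g = f*h + n, where `h`
    is the known point-spread function and `nsr` approximates the
    noise-to-signal power ratio (an illustrative, untuned value)."""
    H = np.fft.fft2(h, s=g.shape)
    G = np.fft.fft2(g)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)          # Wiener filter transfer function
    return np.real(np.fft.ifft2(W * G))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = rng.random((128, 128))                        # stand-in for the true image
    h = np.ones((5, 5)) / 25.0                        # 5x5 uniform blur (known PSF)
    g = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h, s=f.shape)))
    g += 0.01 * rng.standard_normal(f.shape)          # additive noise
    f_hat = wiener_deconvolve(g, h)
    print("restoration MSE:", float(np.mean((f - f_hat) ** 2)))
```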
However, there are numerous situations in which the point-spread function is not explicitly known, and the true image f(x,y) must be identified directly from the observed image g(x,y) by using partial or no information about the true image and the point-spread function. In these cases, we have the more difficult problem of blind deconvolution. Several methods have been proposed for blind deconvolution and active research continues in this area. In our project, we seek to understand and implement one such method: the iterative blind deconvolution algorithm.
The project involves developing iterative blind deconvolution for application to corrupted images in order to recover the original image. Images are corrupted owing to the inherent noise developed in the electronic circuits of scanners and cameras and during transmission. The original image can be recovered by de-noising the corrupted image. It is assumed that the iterative process can be carried far enough to recover the original image.
The project includes an example of recovering the original words, information and pictures from a corrupted image. Subsequently the characters are separated and checked for whether they are torn; even torn characters can be recovered using dictionary concepts. A scanner that scans a given document and identifies the characters in the image provides the input to the project. As part of the project, noise can be added to the image and then removed using existing denoising techniques or blind deconvolution.
An Intrusion Detection System (IDS) is a program that analyzes what happens, or has happened, during an execution and tries to find indications that the computer has been misused. Due to increasing incidents of cyber attacks, building effective intrusion detection systems is essential for protecting information systems security. Soft computing techniques are increasingly being used for such problems; among them, neural networks and fuzzy logic are incorporated here to achieve robustness and flexibility. These two soft computing techniques, artificial neural networks and fuzzy logic, are used to build the intrusion detection system. We demonstrate the feasibility of our method by performing several experiments on the KDD intrusion detection competition dataset. The experimental results show that the neural network and fuzzy classifier successfully detect and classify the various types of attacks in the dataset.
Image processing has a broad spectrum of applications, such as remote sensing and weather forecasting via satellites, image transmission in broadcast television, facsimile transmission of graphic documents over telephone lines, image storage for business applications, and medical processing such as analyzing X-ray images to detect fractures.
All these applications need large volumes of image data to be stored and transmitted efficiently.
These needs can be catered for by using image compression techniques. Image compression is the art and science of efficiently coding digital images to reduce the number of bits required to represent an image, and it is achieved by exploiting redundancies in the image. The purpose of this project is to provide a different and easy approach for compressing and decompressing an image. Out of the various ANNs, Adaptive Resonance Theory (ART) is taken into consideration for image compression and decompression because of its ability to overcome the stability-plasticity dilemma.
In this work, the use of an adaptive regularization parameter in a constrained deconvolution method was considered. Since the human visual system favors the detection of edges and boundaries, rather than more subtle differences in intensity in homogeneous areas, noise artifacts may be less disturbing in high contrast regions than in low contrast regions. It is then advantageous to use a stronger constraint in smooth areas of an image than in high contrast regions. While traditional restoration methods find it difficult to implement an adaptive restoration spatially across an image, neural-network-based image restoration methods are particularly amenable to spatial variance of the restoration parameters.
This work presents a neural-network-based algorithm able to adaptively vary its restoration parameters spatially across the image to be restored. The focus is on varying the regularization parameter to take into account high- and low-contrast regions of the image.
A method based on using local image statistics to select the optimal value of the regularization parameter is considered.
This method imitates the human visual system and produces superior results when compared to non-adaptive methods. It is shown that the adaptive regularization techniques can compensate for an insufficiently known degradation in the case of a spatially variant distortion. Moreover, an adaptive spatially variant restoration is shown to be completed in the same order of magnitude of time as a much simpler non-adaptive, spatially invariant restoration.
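A minimal sketch of selecting a spatially varying regularization parameter from local image statistics, assuming scipy; the two lambda values, the window size and the linear mapping from local variance are illustrative choices rather than the paper's actual rule.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_lambda(image, lam_smooth=0.1, lam_edge=0.005, win=7):
    """Map local image variance to a spatially varying regularization parameter:
    smooth (low-variance) regions get a strong constraint, high-contrast regions
    a weak one, following the visual-system argument above."""
    mean = uniform_filter(image, size=win)
    var = uniform_filter(image ** 2, size=win) - mean ** 2
    var_norm = (var - var.min()) / (np.ptp(var) + 1e-12)      # 0 = flat, 1 = high contrast
    return lam_smooth + (lam_edge - lam_smooth) * var_norm    # interpolate between the two

if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[:, 32:] = 1.0                                          # a step edge
    lam = adaptive_lambda(img)
    print("lambda at edge:", lam[32, 32].round(4), "in flat area:", lam[5, 5].round(4))
```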
An architecture of a surface acoustic wave (SAW) processor based on an artificial neural network is proposed for automatic recognition of different types of digital passband modulation. Three feed-forward networks are trained to recognize filtered binary phase shift keying (BPSK) and quadrature phase shift keying (QPSK) signals, as well as unfiltered BPSK and QPSK signals. The performance of the processor in the presence of additive white Gaussian noise (AWGN) is simulated. The influence of second-order effects in SAW devices, as well as phase and amplitude errors, on the performance of the processor is also studied.
One of the most promising approaches to the problem of automatic modulation recognition is the use of artificial neural networks (ANN). In the majority of practical cases the ANN-based modulation recognition system includes a pre-processing stage extracting features such as statistical moments of the signal, its instantaneous frequency, its phase and its power spectrum. The pre-processor is followed by the neural network, which is trained to make a decision based upon the features, whose values are specific to each modulation scheme. The ANN approach has important advantages. The flexibility offered by the ANN architecture is one of them, since the ANN can be realised in many different ways using either hardware or software; another is the higher reliability that can be achieved in automatic modulation recognition. However, there are some disadvantages of the traditional ANN architecture based on feature extraction: it is difficult to extract the features in real time because of the considerable amount of computation needed to process the raw data.
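As a hedged illustration of such a feature-extraction pre-processor, the sketch below computes a few classic features from a passband signal: the amplitude variation and the strength of the carrier lines in the squared and fourth-power signal, which help separate BPSK from QPSK. The actual feature set and SAW front end of the paper are not reproduced, and all signal parameters are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def modulation_features(x, fs):
    """A small feature vector for digital-modulation recognition: amplitude
    statistics plus the strength of spectral lines in the squared and
    fourth-power signal (BPSK produces a strong line when squared, QPSK only
    when raised to the fourth power)."""
    amp = np.abs(hilbert(x))                          # instantaneous amplitude

    def line_strength(sig):
        spec = np.abs(np.fft.rfft(sig - sig.mean()))
        return spec.max() / (spec.mean() + 1e-12)     # peak-to-average spectral ratio

    return np.array([
        amp.std() / amp.mean(),                       # amplitude variation
        line_strength(x ** 2),                        # carrier line after squaring
        line_strength(x ** 4),                        # carrier line after fourth power
    ])

if __name__ == "__main__":
    fs, fc, n = 8000, 800, 4096
    t = np.arange(n) / fs
    bits = np.random.randint(0, 2, n // 32).repeat(32)
    bpsk = np.cos(2 * np.pi * fc * t + np.pi * bits)
    sym = np.random.randint(0, 4, n // 32).repeat(32)
    qpsk = np.cos(2 * np.pi * fc * t + np.pi / 4 + sym * np.pi / 2)
    print("BPSK features:", modulation_features(bpsk, fs).round(2))
    print("QPSK features:", modulation_features(qpsk, fs).round(2))
```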
Two important issues in manufacturing are process modeling and optimization. Manufacturing processes are characterized by a multiplicity of dynamically interacting process variables. Surface roughness has been one of the most important factors in predicting the machining performance of any machining operation. Most surface roughness prediction models are empirical and are generally based on laboratory experiments. In addition, it is very difficult in practice to keep all factors under control as required to obtain reproducible results. In general, these models involve a complex relationship between surface roughness, the operational parameters and the work material.
The project involves identifying the optimum cutting parameters for wire-cut electrical discharge machining using a genetic algorithm. The parameters involved are the wire travel speed, the job speed, voltage, current, spark-start time and spark-end time. These parameters decide the life of the wire used for cutting.
The desire to make computers think as human beings do resulted in the development of artificial intelligence and artificial neural networks (ANN). An artificial neural network has been used to train on the data off-line. The weight-updating algorithm developed for the ANN is based on the backpropagation algorithm.
Glass fibre reinforced plastic (GFRP) is considered as the workpiece material and a carbide tool is used. Different speeds, feeds and depths of cut were considered for machining the material. A tool dynamometer was used to measure the axial, radial and tangential forces, and the flank wear land width was measured using a toolmaker's microscope.
Training of the patterns was done using the backpropagation algorithm (BPA). This is supervised training that uses both the input and target patterns. Based on the mean squared error (MSE), the ANN converges and the set of final weights is stored on the hard disk for further testing of the data.
The project titled “Modeling and optimization of tool wear prediction of turning process using Genetic Algorithm” has been taken up with a view to bringing out the benefits of developing a model for tool wear prediction in terms of speed, feed rate, depth of cut and hardness for the turning process.
Genetic algorithms (GA) are non-conventional computational optimization schemes. They yield a global optimum, unlike traditional optimization procedures, which give a local optimum. This study concentrates on a genetic algorithm with selection using the roulette wheel, and reproduction using crossover and mutation operators.
The final model can be used to accurately predict the flank wear in future production, thus reducing the cost of experimentation and time.
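A hedged Python sketch of the real-coded GA described above (roulette-wheel selection, crossover and mutation), fitted here to a small set of made-up turning records using an assumed power-law wear model; the data, model form and GA parameters are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)

# illustrative records: cutting speed (m/min), feed (mm/rev), depth of cut (mm), flank wear (mm)
data = np.array([
    [100, 0.10, 0.5, 0.12],
    [150, 0.10, 0.5, 0.18],
    [150, 0.20, 1.0, 0.27],
    [200, 0.15, 1.0, 0.31],
    [200, 0.20, 1.5, 0.38],
])
X, y = data[:, :3], data[:, 3]

def predict(coeffs, X):
    c0, c1, c2, c3 = coeffs
    return c0 * X[:, 0] ** c1 * X[:, 1] ** c2 * X[:, 2] ** c3   # assumed power-law wear model

def fitness(coeffs):
    return 1.0 / (1e-6 + np.mean((predict(coeffs, X) - y) ** 2))  # lower error -> higher fitness

POP, GENS = 60, 300
lo = np.array([1e-4, 0.0, 0.0, 0.0])
hi = np.array([1.0, 2.0, 2.0, 2.0])
pop = rng.uniform(lo, hi, (POP, 4))

for _ in range(GENS):
    fit = np.array([fitness(ind) for ind in pop])
    probs = fit / fit.sum()
    # roulette-wheel selection of parents, arithmetic crossover, Gaussian mutation
    parents = pop[rng.choice(POP, size=POP, p=probs)]
    alpha = rng.random((POP, 1))
    children = alpha * parents + (1 - alpha) * parents[::-1]
    children += rng.normal(0, 0.02, children.shape)
    pop = np.clip(children, lo, hi)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("fitted coefficients:", best.round(3))
print("predicted vs measured wear:", predict(best, X).round(3), y)
```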
The project work involves forecasting the amount of electrical power required in the future, where the future may mean the next hour, day, week, month or year. An artificial neural network (ANN) is used to learn the power demand of the past few years, and based on this learning the forecast is produced by the ANN.
The thesis focuses on a speaker identification procedure using linear predictive coding over a communication channel. Words uttered by several persons were recorded over a landline telephone and linear predictive coding (LPC) parameters were obtained for all these words; this constitutes the training phase. Subsequently the speech of the same persons was recorded on a mobile phone, the wave files were transferred to a PC, and the LPC parameters were computed and compared with those obtained from the landline speech, giving dissimilarity measures. Some people were considered authenticated persons and the remaining were treated as impostors. The false acceptance rate (the proportion of impostors accepted) and the false rejection rate (the proportion of authorized persons rejected) were computed, and the equal error rate for different combinations of accepted persons and impostors was analysed.
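A minimal Python sketch of the LPC front end, computing the coefficients by the autocorrelation method with Levinson-Durbin recursion and comparing two utterances with a simple Euclidean dissimilarity; the signals, model order and distance measure are illustrative assumptions, not the thesis's exact choices.

```python
import numpy as np

def lpc(signal, order=12):
    """Linear predictive coding coefficients via autocorrelation + Levinson-Durbin.
    The coefficients model the vocal tract and are compared between training
    (landline) and test (mobile) utterances with a dissimilarity measure."""
    x = signal - np.mean(signal)
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err   # reflection coefficient
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1][:i]
        err *= (1 - k ** 2)
    return a

def dissimilarity(a1, a2):
    """Simple Euclidean distance between two LPC coefficient vectors."""
    return float(np.linalg.norm(a1 - a2))

if __name__ == "__main__":
    fs = 8000
    t = np.arange(fs) / fs
    word_landline = np.sin(2 * np.pi * 180 * t) + 0.3 * np.sin(2 * np.pi * 900 * t)
    word_mobile = word_landline + 0.05 * np.random.randn(fs)   # noisier channel
    print("dissimilarity:", dissimilarity(lpc(word_landline), lpc(word_mobile)))
```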
Speech is the output of a time-varying vocal tract system, and its objective is to convey messages. A speech signal not only carries the message but also encodes the identity of the speaker, the language, and the physical and emotional state of the speaker. Of these, recognizing the speaker from his or her speech signal receives significant attention, as it can be put to use in many practical applications where there is a need for secure access to personal data or in any security-based system. The physiological and behavioural characteristics of a speaker help us to identify the speaker. The variations in the sizes and shapes of the vocal tract, vocal cords, velum and nasal cavities (physiological characteristics) and their movements during speech production (behavioural characteristics) create differences in the spectra of speech signals. Phoneme (the basic unit of sound) articulation, phoneme-to-phoneme transitions, pitch contours, etc. are referred to as “low-level” behavioural information. “Segmental” information refers to information extracted from individual speech sounds or phones, and “supra-segmental” information refers to speech phenomena involving consecutive phones. “High-level” behavioural information takes into account characteristic words or phrases of a speaker and also speaking style.
The process of recognizing a speaker from the speaker's voice is called speaker recognition (SR). SR can be divided into two types: speaker verification (SV) and speaker identification (SI). In SV, an unknown speaker makes an identity claim; the unknown speaker's utterance is compared with an already trained utterance of the speaker whose identity is being claimed and, based on a decision logic, the unknown speaker is either accepted or rejected. The problem of SI is identifying a speaker from his or her voice from a given set of speakers without any claim being made by the test speaker.
The project involves segmenting textured images used in textile industries. Many conventional algorithms are available for segmentation, but Gabor segmentation is used here. A Gabor filter bank is a set of filters obtained at different orientations corresponding to the scale of the actual image texture. Each filter in the bank is convolved with the actual image to obtain features of the image.
The features obtained are almost the same when different Gabor filters are convolved with the original image. In order to make sure that the convolved image has the correct features, all the convolved matrices are averaged. Each element of the averaged matrix is assigned a class label based on predefined ranges of values; the class labels are alphabetic characters. When the labeled matrix is viewed, it shows the segmented portions of the image. Subsequently, a Kronecker delta rule is applied to refine the segmentation.
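A small Python sketch of the Gabor filter-bank step described above: build oriented Gabor kernels, convolve them with the image, average the responses and assign class labels from value ranges. The kernel parameters and the quantile-based labelling are illustrative choices, not the project's actual settings.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(theta, wavelength=8.0, sigma=4.0, size=21):
    """Real part of a Gabor kernel oriented at angle `theta` (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / wavelength)

def gabor_features(img, orientations=(0, 45, 90, 135)):
    """Convolve the image with a bank of Gabor filters at several orientations and
    average the magnitude responses; class labels can then be assigned from
    predefined ranges of the averaged values."""
    responses = [np.abs(convolve(img, gabor_kernel(np.deg2rad(a)))) for a in orientations]
    return np.mean(responses, axis=0)

if __name__ == "__main__":
    img = np.random.rand(128, 128)                  # placeholder for a textile texture image
    avg = gabor_features(img)
    labels = np.digitize(avg, bins=np.quantile(avg, [0.33, 0.66]))   # 3 illustrative classes
    print(labels.shape, np.bincount(labels.ravel()))
```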
The project takes an image texture as input and produces the partitioned or segmented textures as output. The project is well justified, since automatic segmentation, identification and classification play an important role in image processing. The scope for future work is wide: there is a lot of demand for segmentation of 3D objects, especially in magnetic resonance imaging and computed tomography.
Sequential steganography refers to the class of embedding algorithms that hide messages in consecutive (time, spatial or frequency domain) features of a host signal. The project presents a steganalysis method that estimates the secret key used in sequential steganography. A theory is developed for detecting abrupt jumps in the statistics of the stego signal during steganalysis. Stationary and non-stationary host signals with low, medium and high SNR embedding are considered. A locally most powerful steganalysis detector for the low-SNR case is also derived. Several techniques to make the steganalysis algorithm work for non-stationary digital image steganalysis are also presented. Extensive experimental results illustrate the strengths and weaknesses of the proposed steganalysis algorithm.
The work aims at hiding one message inside another. This includes hiding an image in an image, text in an image, text in text, sound in sound, and so on. A chosen image is taken as the standard cover for hiding secret images, and logic has been developed to hide and unhide the message.
Steganography is possible among people who have the hiding and unhiding software. In addition to the standard hiding procedure in steganography, different combinations of methods can be implemented, and cryptography can also be used.
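As a minimal illustration of hiding one message inside another, the sketch below embeds a short text in the least-significant bits of a grayscale cover image and recovers it; this is a generic LSB scheme, not necessarily the logic developed in the project.

```python
import numpy as np

def hide_message(cover, message):
    """Hide a UTF-8 text message in the least-significant bits of a grayscale
    cover image (one bit per pixel). A length header is not included here;
    the receiver is given the message length separately."""
    bits = np.unpackbits(np.frombuffer(message.encode("utf-8"), dtype=np.uint8))
    if bits.size > cover.size:
        raise ValueError("message too long for this cover image")
    stego = cover.copy().ravel()
    stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits   # overwrite the LSBs
    return stego.reshape(cover.shape)

def unhide_message(stego, n_chars):
    """Recover `n_chars` bytes of text from the stego image's least-significant bits."""
    bits = (stego.ravel()[:n_chars * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes().decode("utf-8")

if __name__ == "__main__":
    cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    stego = hide_message(cover, "secret")
    print(unhide_message(stego, 6))                 # -> "secret"
```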
This project involves the development of security for software. It involves key generation and key recovery procedures, which prevent piracy of any newly developed marketable package. The key generation and key recovery software uses intelligent techniques to create secure keys, and these keys play an important role in providing access to the licensed software.
During key generation, the expiry date, the version and the Ethernet number of the customer are taken into consideration and the keys are developed. The keys are used as inputs to an artificial neural network, whose outputs are encoded as floating-point values that play an important role in avoiding piracy. In addition, two sets of final weight files are developed. Altogether, four files along with the required software are provided to the customer by the vendor.
The customer installs all the software on the computer. During execution of the software, the recovery program converts the floating-point values into characters, which are then reassembled to obtain the Ethernet number and expiry date. Subsequently a separate program reads the system date and the actual Ethernet number. The recovered and actual Ethernet numbers are compared; if the strings match, a check is made that the system date is earlier than the expiry date of the original software. If everything matches, execution of the software continues.
Object tracking is a challenging task in spite of the many sophisticated methods that have been developed; the major challenge is to keep track of the object of a particular choice. In this work, a new method for tracking moving objects in video is proposed. The video is segmented by contextual clustering, and the segmented image is further processed using MATLAB's imfeature function, which provides 24 region properties. In this work, two of these properties are used to process the features of the segmented image and highlight the presence of the human. The coordinates of the human in the video are given as input to a Kalman filter, and the process is repeated for successive video frames. An artificial neural network with the supervised backpropagation algorithm learns the input-output behaviour of the Kalman filter and provides a better estimate of the movement of the human across video frames. Multi-target human tracking is also attempted.
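A minimal constant-velocity Kalman filter in Python, of the kind that could take the segmented human's centroid per frame as its measurement; the motion model and noise levels are illustrative assumptions, and the ANN refinement stage is not shown.

```python
import numpy as np

class ConstantVelocityKalman:
    """Minimal constant-velocity Kalman filter for tracking the (x, y) centroid
    of a detected person across frames; the noise levels are illustrative."""
    def __init__(self, dt=1.0, q=1e-2, r=1.0):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)       # state transition
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)        # we only measure position
        self.Q = q * np.eye(4)                                 # process noise
        self.R = r * np.eye(2)                                 # measurement noise
        self.x = np.zeros(4)                                   # state: [x, y, vx, vy]
        self.P = np.eye(4)

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with the measured centroid z = [x, y]
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                                      # filtered position estimate

if __name__ == "__main__":
    kf = ConstantVelocityKalman()
    detections = [(10, 20), (12, 22), (14, 25), (17, 27)]      # centroids from segmentation
    for z in detections:
        print(kf.step(z).round(1))
```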
Wireless communication is part and parcel of modern communication, and cell phones have become handy for everyone for instant communication. During such communication, clarity, continuity and low noise are the major parameters considered. To communicate between two or more cell phone users, transceivers called base stations must be erected; the quality and capacity of the base stations decide how many are needed in a given area. A base station is similar to a telephone exchange. The question addressed is: what is the optimal number of base stations required so that all the nodes they serve can communicate effectively, without redundancy, duplication, or more base stations than are needed to satisfy all the nodes?
The desire to make computers think as human beings do resulted in the development of genetic algorithms. The genetic algorithm is an artificial intelligence method that proves superior in many respects to other classical methods for solving the character recognition problem. The solution generated by training the algorithm is positive and definite.
Texture segmentation plays an important role in recognizing and identifying a material or a characteristic type in a particular image. Wavelets are employed for the computation of single- and multi-scale roughness features because of their ability to extract information at different resolutions. Features are extracted in multiple directions using directional wavelets obtained from partial derivatives of the Gaussian distribution function. The first- and second-derivative wavelets are used to obtain the features of the textured image at different orientations, namely 0°, 45°, 90° and 135°, and at scales of 1, 2 and 4.
The feature extraction part consists of two stages: the extraction of the directional roughness feature and of the percentage-of-energy feature. The directional roughness features result in high-quality texture segmentation performance, whereas the percentage-of-energy feature retains the important properties of fractal-dimension-based features, such as insensitivity to absolute illumination and contrast. The percentage-of-energy feature, computed using exponential wavelets, is used for segmenting the different textures in a given image with the k-means algorithm. For classification, a database has been created from different texture samples and classification is done using a template matching technique.
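A small sketch of the k-means segmentation stage, assuming scikit-learn and a stack of per-pixel feature maps; the number of classes and the random feature maps are placeholders for the wavelet-derived features described above.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_features(feature_maps, n_classes=4):
    """Cluster per-pixel feature vectors (e.g. directional roughness and
    percentage-of-energy maps stacked along the last axis) with k-means and
    return a label image."""
    h, w, d = feature_maps.shape
    X = feature_maps.reshape(-1, d)
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(X)
    return labels.reshape(h, w)

if __name__ == "__main__":
    maps = np.random.rand(64, 64, 6)          # placeholder for 6 wavelet-derived feature maps
    seg = segment_features(maps)
    print(seg.shape, np.unique(seg))
```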
The contourlet transform consists of two modules: the Laplacian pyramid and the directional filter bank. When both of them use perfect reconstruction filters, the contourlet expansion and reconstruction form a perfect dual; therefore, the contourlet transform can be employed as a coding scheme. The contourlet coefficients can be transmitted through the wireless channel in the same way as the original image, where the transmission is prone to noise and block loss. However, the reconstruction at the receiver performs differently depending on whether the image is transmitted directly or coded by the contourlet transform. This paper studies the performance of contourlet coding in image recovery and denoising. The simulation results show that for general images the contourlet transform is quite competitive with the wavelet transform in the SNR sense and in visual effect.
The quality of a board or a sheet of veneer determines its potential uses and its price for the sawmill. Automatic detection of defects in the wood surface and grading of the products is one of the key interest areas in the mechanical wood industry. In this work, an intelligent method for knot recognition is implemented using a self-organizing map (SOM). Defective wood is used as the template: during training, the SOM learns the defective wood, and during testing a picture containing a combination of all the defects is given as input.
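A minimal numpy sketch of SOM training on defect-patch feature vectors, with illustrative grid size, learning-rate and neighbourhood schedules; the actual feature extraction and labelling of map units are not shown.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Train a small self-organizing map on feature vectors extracted from
    defective-wood template patches."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    dim = data.shape[1]
    weights = rng.random((rows, cols, dim))
    # coordinates of every map unit, used for the neighbourhood function
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)
        sigma = sigma0 * np.exp(-t / epochs)
        x = data[rng.integers(len(data))]
        # best-matching unit (BMU)
        dists = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)
        # update the BMU and its neighbours toward the sample
        d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
        h = np.exp(-d2 / (2 * sigma ** 2))[..., None]
        weights += lr * h * (x - weights)
    return weights

if __name__ == "__main__":
    patches = np.random.rand(500, 16)              # stand-in for defect-patch feature vectors
    som = train_som(patches)
    test = np.random.rand(16)
    bmu = np.unravel_index(np.argmin(np.linalg.norm(som - test, axis=2)), som.shape[:2])
    print("test patch maps to unit", bmu)          # the unit's learned defect class labels it
```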
This work focuses on 3D facial modeling using three images of a face, each taken 90° apart. Each image is segmented to identify the skin and to locate the eye centres, nose profile and mouth profile. The three images are combined to form a 3D facial model: the segmented portions of the images are placed on a standard computer plastic model, and the various features of the plastic model are then animated to correspond to the positions of the features in the subsequent images.