Distributed cache updating for the Dynamic source routing protocol
Abstract: On-demand routing protocols use route caches to make routing decisions. Due to mobility, cached routes easily become stale. To address the cache staleness issue, prior work in DSR used heuristics with ad hoc parameters to predict the lifetime of a link or a route. However, heuristics cannot accurately estimate timeouts because topology changes are unpredictable. In this paper, we propose proactively disseminating broken link information to the nodes that have that link in their caches. We define a new cache structure called a cache table and present a distributed cache update algorithm. Each node maintains in its cache table the information necessary for cache updates. When a link failure is detected, the algorithm notifies all reachable nodes that have cached the link in a distributed manner. The algorithm does not use any ad hoc parameters, thus making route caches fully adaptive to topology changes. We show that the algorithm outperforms DSR with path caches and with Link-MaxLife, an adaptive timeout mechanism for link caches. We conclude that proactive cache updating is key to the adaptation of on-demand routing protocols to mobility.
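To make the idea concrete, here is a minimal Python sketch of a cache table and its update on link failure; the class and field names are illustrative assumptions, not the paper's actual data structures.

```python
# A minimal sketch of the cache-table idea described above; names are
# illustrative, not the paper's actual structures.

class CacheTable:
    """Per-node route cache that records, for each cached route, which
    neighbors the route was learned from or forwarded to, so that a
    broken-link notification can be propagated to them."""

    def __init__(self):
        # route (tuple of node ids) -> set of neighbor ids to notify
        self.entries = {}

    def add_route(self, route, learned_from=None):
        self.entries.setdefault(tuple(route), set())
        if learned_from is not None:
            self.entries[tuple(route)].add(learned_from)

    def routes_using_link(self, u, v):
        """Return cached routes that traverse link (u, v) in either direction."""
        hits = []
        for route in self.entries:
            for a, b in zip(route, route[1:]):
                if {a, b} == {u, v}:
                    hits.append(route)
                    break
        return hits

    def on_link_failure(self, u, v):
        """Evict stale routes and return the neighbors that must be told,
        mirroring the idea of notifying all reachable nodes that cached
        the broken link."""
        to_notify = set()
        for route in self.routes_using_link(u, v):
            to_notify |= self.entries.pop(route)
        return to_notify
```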
An Adaptive Programming Model for Fault-Tolerant Distributed Computing
Abstract: The capability of dynamically adapting to distinct runtime conditions is an important issue when designing distributed systems in which a negotiated quality of service (QoS) cannot always be delivered between processes. Providing fault tolerance for such dynamic environments is a challenging task. In this context, this paper proposes an adaptive programming model for fault-tolerant distributed computing, which provides upper-layer applications with process state information according to the current system synchrony (or QoS). The underlying system model is hybrid, composed of a synchronous part (where there are time bounds on processing speed and message delay) and an asynchronous part (where there is no time bound). However, this composition can vary over time; in particular, the system may become totally asynchronous (e.g., when the underlying system QoS degrades) or totally synchronous. Moreover, processes are not required to share the same view of the system synchrony at a given time. To illustrate what can be done in this programming model and how to use it, the consensus problem is taken as a benchmark problem. The paper also presents an implementation of the model that relies on a negotiated QoS for communication channels.
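As a rough illustration of the kind of interface such a model might expose, the Python sketch below reports what a node may conclude about a peer depending on whether the channel currently behaves synchronously; the state names, function, and parameters are assumptions for illustration, not the paper's API.

```python
# A sketch of a synchrony-aware process-state query; all names here are
# illustrative assumptions.

from enum import Enum

class ProcState(Enum):
    UP = "up"                # known alive: timely channel, reply in time
    DOWN = "down"            # known crashed: timely channel, deadline missed
    UNCERTAIN = "uncertain"  # channel currently untimely; no reliable verdict

def query_state(channel_is_timely, replied_within_bound):
    """With a synchronous (timely) channel the verdict is definitive;
    with an asynchronous channel the model can only report uncertainty."""
    if channel_is_timely:
        return ProcState.UP if replied_within_bound else ProcState.DOWN
    return ProcState.UNCERTAIN
```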
Face Recognition Using Laplacianfaces
Abstract: Face recognition is a controversial subject at present. A system of this kind can recognize and track dangerous criminals and terrorists in a crowd, but some contend that it is an extreme invasion of privacy. Proponents of large-scale face recognition feel that it is a necessary evil for making our country safer. It could also benefit the visually impaired, allowing them to interact more easily with their environment. In addition, a computer-vision-based authentication system could be put in place to grant access to a computer or a specific room using face recognition. Another possible application would be to integrate this technology into an artificial intelligence system for more realistic interaction with humans.
We propose an appearance-based face recognition method called the Laplacianface approach. Using Locality Preserving Projections (LPP), the face images are mapped into a face subspace for analysis. Unlike Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which effectively see only the Euclidean structure of the face space, LPP finds an embedding that preserves local information and obtains a face subspace that best detects the essential face manifold structure. The Laplacianfaces are the optimal linear approximations to the eigenfunctions of the Laplace-Beltrami operator on the face manifold. In this way, the unwanted variations resulting from changes in lighting, facial expression, and pose may be eliminated or reduced.
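The following is a minimal Python sketch of the LPP computation described above, using a k-nearest-neighbor heat-kernel affinity graph; the parameters k and t, the dense solver, and the small ridge term are simplifying assumptions.

```python
# A minimal sketch of Locality Preserving Projections (LPP); parameter
# values and the dense solver are simplifying assumptions.

import numpy as np
from scipy.linalg import eigh

def lpp(X, n_components=10, k=5, t=1.0):
    """X: (n_samples, n_features) matrix of vectorized face images.
    Returns a projection matrix with n_components columns."""
    n = X.shape[0]
    # Pairwise squared distances for the k-nearest-neighbor affinity graph.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d2[i])[1:k + 1]:           # k nearest neighbors
            W[i, j] = W[j, i] = np.exp(-d2[i, j] / t)  # heat-kernel weight
    D = np.diag(W.sum(1))
    L = D - W                                          # graph Laplacian
    # Generalized eigenproblem X^T L X a = lambda X^T D X a; the smallest
    # eigenvalues give the locality-preserving directions. A small ridge
    # keeps the right-hand matrix well conditioned.
    vals, vecs = eigh(X.T @ L @ X, X.T @ D @ X + 1e-6 * np.eye(X.shape[1]))
    return vecs[:, :n_components]
```

In practice a PCA projection is typically applied to the images first, so that the matrices in the generalized eigenproblem are nonsingular.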
Theoretical analysis shows that PCA, LDA, and LPP can be obtained from different graph models. We compare the proposed Laplacianface approach with the Eigenface and Fisherface methods on three different face data sets. Experimental results suggest that the proposed Laplacianface approach provides a better representation and achieves lower error rates in face recognition.
Principal Component Analysis (PCA) is a statistical method under the broad title of factor analysis. The purpose of PCA is to reduce the large dimensionality of the data space (observed variables) to the smaller intrinsic dimensionality of the feature space (independent variables), which is what is needed to describe the data economically. This is the case when there is a strong correlation between observed variables. PCA can be used for prediction, redundancy removal, feature extraction, data compression, and more. Because PCA is a classical technique that operates in the linear domain, it is well suited to applications with linear models, such as signal processing, image processing, systems and control theory, and communications.
The main idea of using PCA for face recognition is to express the large 1-D vector of pixels constructed from a 2-D face image in terms of the compact principal components of the feature space. This is called eigenspace projection. The eigenspace is calculated by identifying the eigenvectors of the covariance matrix derived from a set of face images (vectors).
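Below is a short Python sketch of eigenspace projection under this formulation, using the standard trick of eigendecomposing the small sample-by-sample matrix instead of the full pixel covariance; the array shapes and parameter choices are illustrative.

```python
# A sketch of eigenspace projection (eigenfaces); shapes and defaults
# are illustrative assumptions.

import numpy as np

def eigenfaces(X, n_components=20):
    """X: (n_samples, n_pixels), each row a flattened 2-D face image.
    Returns the mean face and the top principal components."""
    mean = X.mean(axis=0)
    Xc = X - mean                          # center the data
    # For high-dimensional images, eigendecompose the small
    # (n_samples x n_samples) matrix instead of the pixel covariance.
    vals, vecs = np.linalg.eigh(Xc @ Xc.T)
    order = np.argsort(vals)[::-1][:n_components]
    U = Xc.T @ vecs[:, order]              # map back to pixel space
    U /= np.linalg.norm(U, axis=0)         # normalize the eigenfaces
    return mean, U

def project(face, mean, U):
    """Express a face as compact coordinates in the eigenspace."""
    return U.T @ (face - mean)
```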
Predictive Job Scheduling in a Connection Limited System using Parallel Genetic Algorithm
Abstract: Job scheduling is the key feature of any computing environment, and the efficiency of computing depends largely on the scheduling technique used. Intelligence is the key factor lacking in today's job scheduling techniques. Genetic algorithms are powerful search techniques based on the mechanisms of natural selection and natural genetics.
The scheduler handles multiple jobs, and the resources a job needs are at remote locations. Here we assume that the resources a job needs reside at a single location rather than being split across nodes, and that each node holding a resource runs a fixed number of jobs. Existing algorithms are non-predictive and employ greedy strategies or variants of them. The efficiency of the job scheduling process would increase if previous experience and genetic algorithms were used. In this paper, we propose a model of the scheduling algorithm in which the scheduler learns from previous experience, so that effective job scheduling is achieved as time progresses.
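As a toy illustration of the genetic-algorithm idea, the Python sketch below evolves assignments of jobs to resource nodes under the fixed-capacity assumption stated above; the fitness function, rates, and sizes are invented for illustration and are not the paper's model.

```python
# A toy genetic algorithm: chromosomes encode an assignment of jobs to
# nodes; all constants here are illustrative assumptions.

import random

JOBS, NODES = 12, 4
CAPACITY = 3  # fixed number of jobs a resource node runs (as assumed above)

def fitness(chrom):
    """Penalize assignments that overload a node's fixed job capacity."""
    load = [chrom.count(n) for n in range(NODES)]
    return -sum(max(0, l - CAPACITY) for l in load)

def evolve(pop_size=40, generations=100):
    pop = [[random.randrange(NODES) for _ in range(JOBS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, JOBS)     # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:           # occasional mutation
                child[random.randrange(JOBS)] = random.randrange(NODES)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```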
Digital Image Processing Techniques for the Detection and Removal of Cracks in Digitized Paintings
Abstract: An integrated methodology for the detection and removal of cracks on digitized paintings is presented in this project. The cracks are detected by thresholding the output of the morphological top-hat transform. Afterward, thin dark brush strokes that have been misidentified as cracks are removed using either a median radial basis function neural network on hue and saturation data or a semi-automatic procedure based on region growing. Finally, crack filling using order statistics filters or controlled anisotropic diffusion is performed. The methodology has been shown to perform very well on digitized paintings suffering from cracks.
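A brief Python sketch of the detection step described above: since cracks appear as thin dark structures, a black top-hat transform followed by a threshold highlights them. The OpenCV-based formulation, kernel size, and threshold value are illustrative assumptions, not the paper's exact procedure.

```python
# A sketch of crack detection via the morphological top-hat transform;
# kernel size and threshold are illustrative assumptions.

import cv2

def detect_cracks(gray, kernel_size=9, thresh=20):
    """gray: 8-bit grayscale image of the digitized painting.
    Returns a binary mask of candidate crack pixels."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size,) * 2)
    # Black top-hat = closing(image) - image: responds to dark details
    # thinner than the structuring element, such as cracks.
    tophat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    _, mask = cv2.threshold(tophat, thresh, 255, cv2.THRESH_BINARY)
    return mask
```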
A Distributed Database Architecture for Global Roaming in Next-Generation Mobile Networks
Abstract: The next-generation mobile network will support terminal mobility, personal mobility, and service provider portability, making global roaming seamless. A location-independent personal telecommunication number (PTN) scheme is conducive to implementing such a global mobile system. However, non-geographic PTNs, coupled with the anticipated large number of mobile users in future mobile networks, may introduce very large centralized databases. This necessitates research into the design and performance of high-throughput database technologies used in mobile systems, to ensure that future systems can efficiently carry the anticipated loads. This paper proposes a scalable, robust, and efficient location database architecture based on location-independent PTNs. The proposed multi-tree database architecture consists of a number of database subsystems, each of which is a three-level tree structure connected to the others only through its root. By exploiting the localized nature of calling and mobility patterns, the proposed architecture effectively reduces the database loads as well as the signaling traffic incurred by the location registration and call delivery procedures. In addition, two memory-resident database indices, the memory-resident direct file and the T-tree, are proposed for the location databases to further improve their throughput. An analytical model and numerical results are presented to evaluate the efficiency of the proposed database architecture. The results reveal that the proposed architecture for location management can effectively support the anticipated high user density in future mobile networks.
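As a toy illustration of the multi-tree lookup, the Python sketch below walks up a three-level subsystem tree and falls back to the other subsystems' roots; the classes and routing logic are assumptions for illustration only, not the paper's protocol.

```python
# A toy sketch of lookup in a multi-tree of location databases joined
# only at the roots; all names here are illustrative assumptions.

class DB:
    """One location database node in a three-level subsystem tree."""
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.entries = {}  # PTN -> current location record

def locate(ptn, local_leaf, other_roots):
    """Resolve a location-independent PTN, exploiting locality: walk up
    from the local leaf; only if the local subsystem has no entry are
    the other subsystems' roots consulted."""
    node = local_leaf
    while node is not None:          # leaf -> middle level -> root
        if ptn in node.entries:
            return node.entries[ptn]
        node = node.parent
    for root in other_roots:         # inter-subsystem query via roots
        if ptn in root.entries:
            return root.entries[ptn]
    return None
```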