Sunday, October 11, 2009
Free Source Code for Java Projects
1. JSP Chart Application
Download
2. INTRANET MESSENGER SYSTEM
Description: This project includes chatting, file transfer, net-conference, and offline message sending facilities. It is developed in Java, with the user interface built in Swing and database access handled through JDBC and MS Access. Because it runs over the local network, it removes the need for an Internet connection. If an organization deploys the Intranet Messenger System, each registered user can send messages and file attachments to, and chat with, any other registered user.
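As a rough illustration of the chat facility, a minimal sketch of a message sender over a plain TCP socket is shown below. The host address, port, and one-line message format are assumptions for illustration, not part of the original project.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

// Minimal intranet chat client sketch: connects to an assumed server
// on the local network and sends one line per message.
public class IntranetChatClient {
    public static void main(String[] args) throws Exception {
        String host = "192.168.1.10";   // assumed server address on the intranet
        int port = 5000;                // assumed chat port
        try (Socket socket = new Socket(host, port);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("alice: Hello, is anyone online?"); // send a chat message
            System.out.println("Server replied: " + in.readLine());
        }
    }
}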
Download
3. Web Time Tracker
Description: WebTimeTrack is an Intranet/Internet, enterprise-wide time sheet submission suite consisting of an applet and a Java server application, intended for Human Resources to keep track of employees' time sheets in house or on site. Keeping time sheets has been a practice for many years simply because it gives a company a better overview of employee productivity and, in general, of the balance between the Human Resources budget and other expenditures in the company. Over the years this practice has evolved from paper logs to all sorts of punch-card gadgets; technology has now brought us a step further, so employees can use the Intranet/Internet to log their time sheets.
Download
4. Web Skeletonizing Servlet
Description: This servlet offers a simple little service: you give it a URL, and it returns a web page with some information about the page at that URL, and all the links and other references extracted from the page.
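A minimal sketch of how such a servlet might look is given below. The parameter name url, the class name, and the regex-based link extraction are assumptions for illustration.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.URL;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch of a skeletonizing servlet: fetches the page at ?url=... and
// lists the href targets found in it.
public class SkeletonServlet extends HttpServlet {
    private static final Pattern HREF =
            Pattern.compile("href\\s*=\\s*\"([^\"]+)\"", Pattern.CASE_INSENSITIVE);

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String target = req.getParameter("url");   // page to skeletonize
        StringBuilder page = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(target).openStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                page.append(line).append('\n');
            }
        }
        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        out.println("<html><body><h2>Links found at " + target + "</h2><ul>");
        Matcher m = HREF.matcher(page);
        while (m.find()) {
            out.println("<li>" + m.group(1) + "</li>");   // one extracted reference
        }
        out.println("</ul></body></html>");
    }
}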
Download
Monday, October 5, 2009
New IEEE Topics
QUIVER: CONSISTENT OBJECT SHARING FOR EDGE SERVICES
Abstract: We present Quiver, a system that coordinates service proxies placed at the “edge” of the Internet to serve distributed clients accessing a service involving mutable objects. Quiver enables these proxies to perform consistent accesses to shared objects by migrating the objects to the proxies performing operations on them. These migrations dramatically improve performance when operations involving an object exhibit geographic locality, since migrating the object into the vicinity of the proxies hosting these operations benefits all such operations. The system also reduces the workload on the central server, since operations are performed at the proxies themselves and handled in first-in, first-out order, and it supports two consistency semantics, serializability and strict serializability, for consistent object sharing. Other workloads benefit from Quiver as well, dispersing the computation load across the proxies and saving the costs of sending operation parameters over the wide area when these are large. Quiver also supports optimizations for single-object reads that do not involve migrating the object. We detail the protocols for implementing object operations and for accommodating the addition, involuntary disconnection, and voluntary departure of proxies. Finally, we discuss the use of Quiver to build an e-commerce application and a distributed network traffic modeling service.
MINING FILE DOWNLOADING TIME IN STOCHASTIC PEER TO PEER NETWORKS
INTRUSION DETECTION IN HOMOGENEOUS & HETEROGENEOUS WIRELESS SENSOR NETWORKS
WATERMARKING RELATIONAL DATABASES USING OPTIMIZATION-BASED TECHNIQUES
Abstract: Proving ownership rights on outsourced relational databases is a crucial issue in today's Internet-based application environments and in many content distribution applications. In this paper, we present a mechanism for proof of ownership based on the secure embedding of a robust imperceptible watermark in relational data. We formulate the watermarking of relational databases as a constrained optimization problem and discuss efficient techniques to solve the optimization problem and to handle the constraints. Our watermarking technique is resilient to watermark synchronization errors because it uses a partitioning approach that does not require marker tuples. Our approach overcomes a major weakness in previously proposed watermarking techniques. Watermark decoding is based on a threshold-based technique characterized by an optimal threshold that minimizes the probability of decoding errors. We built a proof-of-concept implementation of our watermarking technique and showed by experimental results that our technique is resilient to tuple deletion, alteration, and insertion attacks.
A SIGNATURE BASED INDEXING METHOD FOR EFFICIENT CONTENT BASED RETRIEVAL OF RELATIVE TEMPORAL PATTERNS
Abstract: This project aims at an efficient content-based retrieval process for relative temporal patterns using a signature-based indexing method. Rule discovery algorithms in data mining generate a large number of patterns/rules, sometimes even exceeding the size of the underlying database, with only a small fraction being of interest to the user. It is generally understood that interpreting the discovered patterns/rules to gain insight into the domain is an important phase in the knowledge discovery process. However, when there are a large number of generated rules, identifying and analyzing those that are interesting becomes difficult. We address the problem of efficiently retrieving subsets of a large collection of previously discovered temporal patterns. When processing queries on a small database of temporal patterns, sequential scanning of the patterns followed by straightforward computation of the query conditions is sufficient. However, as the database grows, this procedure can be too slow, and indexes should be built to speed up the queries. The problem is to determine what types of indexes are suitable for improving the speed of queries involving the content of temporal patterns. We propose a system with a signature-based indexing method to speed up content-based queries on temporal patterns and to optimize the storage and retrieval of a large collection of relative temporal patterns. The use of signature files improves the performance of temporal pattern retrieval. This retrieval system is currently being combined with visualization techniques for monitoring the behavior of a single pattern or a group of patterns over time.
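As a rough illustration of the signature-file idea, a minimal sketch follows: each pattern is summarized as a bit signature, and a query can skip any pattern whose signature does not cover the query's bits. The hashing scheme and signature width are assumptions, not the paper's actual encoding.

import java.util.BitSet;
import java.util.List;

// Minimal signature-file sketch: superimposed coding over pattern items.
public class SignatureIndex {
    private static final int WIDTH = 64;   // assumed signature width in bits

    // Build a signature by hashing every item of a pattern into the bit vector.
    static BitSet signature(List<String> items) {
        BitSet sig = new BitSet(WIDTH);
        for (String item : items) {
            sig.set(Math.floorMod(item.hashCode(), WIDTH));
        }
        return sig;
    }

    // A pattern can only match the query if its signature covers the query signature.
    static boolean mightContain(BitSet patternSig, BitSet querySig) {
        BitSet missing = (BitSet) querySig.clone();
        missing.andNot(patternSig);       // query bits absent from the pattern signature
        return missing.isEmpty();         // false positives possible, false negatives not
    }
}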
TRUTH DISCOVERY WITH MULTIPLE CONFLICTING INFORMATION PROVIDERS ON WEB
Abstract: The world-wide web has become the most important information source for most of us. Unfortunately, there is no guarantee for the correctness of information on the web. Moreover, different web sites often provide conflicting information on a subject, such as different specifications for the same product. In this paper we propose a new problem called Veracity, that is, conformity to truth, which studies how to find true facts from a large amount of conflicting information on many subjects that is provided by various web sites. We design a general framework for the Veracity problem and invent an algorithm called Truth Finder, which utilizes the relationships between web sites and their information, i.e., a web site is trustworthy if it provides many pieces of true information, and a piece of information is likely to be true if it is provided by many trustworthy web sites. Our experiments show that Truth Finder successfully finds true facts among conflicting information and identifies trustworthy web sites better than the popular search engines.
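A highly simplified sketch of this mutual-reinforcement idea is shown below: fact confidence grows with the trustworthiness of its providers, and site trustworthiness is the average confidence of the facts the site provides. The initial trust value and these simple update rules are assumptions; the actual Truth Finder algorithm in the paper is more elaborate.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified Truth-Finder-style iteration over web sites and the facts they provide.
public class TruthFinderSketch {
    // providers: fact -> list of web sites asserting it
    static Map<String, Double> run(Map<String, List<String>> providers, int rounds) {
        Map<String, Double> trust = new HashMap<>();      // site -> trustworthiness
        Map<String, Double> confidence = new HashMap<>(); // fact -> confidence
        providers.values().forEach(sites -> sites.forEach(s -> trust.put(s, 0.8)));

        for (int r = 0; r < rounds; r++) {
            // Fact confidence: probability that at least one provider is right.
            for (Map.Entry<String, List<String>> e : providers.entrySet()) {
                double allWrong = 1.0;
                for (String site : e.getValue()) {
                    allWrong *= 1.0 - trust.get(site);
                }
                confidence.put(e.getKey(), 1.0 - allWrong);
            }
            // Site trustworthiness: average confidence of the facts it provides.
            Map<String, double[]> acc = new HashMap<>();  // site -> {sum, count}
            for (Map.Entry<String, List<String>> e : providers.entrySet()) {
                for (String site : e.getValue()) {
                    double[] a = acc.computeIfAbsent(site, k -> new double[2]);
                    a[0] += confidence.get(e.getKey());
                    a[1] += 1.0;
                }
            }
            acc.forEach((site, a) -> trust.put(site, a[0] / a[1]));
        }
        return confidence;
    }
}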
TRUSTWORTHY COMPUTING UNDER RESOURCE CONSTRAINTS WITH THE DOWN POLICY
CREDIT CARD FRAUD DETECTION USING HIDDEN MARKOV MODELS
Abstract: Nowadays the usage of credit cards has increased dramatically. As the credit card becomes the most popular mode of payment for both online and regular purchases, cases of fraud associated with it are also rising. In this paper, we model the sequence of operations in credit card transaction processing using a Hidden Markov Model (HMM) and show how it can be used for the detection of frauds. An HMM is initially trained with the normal behavior of a cardholder. If an incoming credit card transaction is not accepted by the trained HMM with sufficiently high probability, it is considered to be fraudulent. At the same time, we try to ensure that genuine transactions are not rejected. We present detailed experimental results to show the effectiveness of our approach and compare it with other techniques available in the literature.
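A minimal sketch of the HMM-based check follows: the forward algorithm scores an observed sequence of transaction categories, and a sequence whose likelihood falls below a threshold is flagged. The model parameters, observation encoding, and threshold are assumptions for illustration, not the paper's trained model.

// Sketch of HMM-based fraud screening: score an observation sequence with the
// forward algorithm and flag it if the likelihood is too low.
public class HmmFraudCheck {
    final double[] start;      // initial state probabilities
    final double[][] trans;    // trans[i][j]: P(next state j | state i)
    final double[][] emit;     // emit[i][o]: P(observation o | state i)

    HmmFraudCheck(double[] start, double[][] trans, double[][] emit) {
        this.start = start; this.trans = trans; this.emit = emit;
    }

    // Forward algorithm: probability of the whole observation sequence.
    double likelihood(int[] obs) {
        int n = start.length;
        double[] alpha = new double[n];
        for (int i = 0; i < n; i++) {
            alpha[i] = start[i] * emit[i][obs[0]];
        }
        for (int t = 1; t < obs.length; t++) {
            double[] next = new double[n];
            for (int j = 0; j < n; j++) {
                double sum = 0.0;
                for (int i = 0; i < n; i++) {
                    sum += alpha[i] * trans[i][j];
                }
                next[j] = sum * emit[j][obs[t]];
            }
            alpha = next;
        }
        double p = 0.0;
        for (double a : alpha) p += a;
        return p;
    }

    // Flag the sequence as suspicious if its likelihood is below an assumed threshold.
    boolean looksFraudulent(int[] obs, double threshold) {
        return likelihood(obs) < threshold;
    }
}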
Sunday, October 4, 2009
IEEE Project Topics
Distributed cache updating for the Dynamic source routing protocol
Abstract: On-demand routing protocols use route caches to make routing decisions. Due to mobility, cached routes easily become stale. To address the cache staleness issue, prior work in DSR used heuristics with ad hoc parameters to predict the lifetime of a link or a route. However, heuristics cannot accurately estimate timeouts because topology changes are unpredictable. In this paper, we propose proactively disseminating the broken link information to the nodes that have that link in their caches. We define a new cache structure called a cache table and present a distributed cache update algorithm. Each node maintains in its cache table the information necessary for cache updates. When a link failure is detected, the algorithm notifies all reachable nodes that have cached the link in a distributed manner. The algorithm does not use any ad hoc parameters, thus making route caches fully adaptive to topology changes. We show that the algorithm outperforms DSR with path caches and with Link-MaxLife, an adaptive timeout mechanism for link caches. We conclude that proactive cache updating is key to the adaptation of on-demand routing protocols to mobility.
An Adaptive Programming Model for Fault-Tolerant Distributed Computing
Abstract: The capability of dynamically adapting to distinct runtime conditions is an important issue when designing distributed systems where a negotiated quality of service (QoS) cannot always be delivered between processes. Providing fault tolerance for such dynamic environments is a challenging task. Considering such a context, this paper proposes an adaptive programming model for fault-tolerant distributed computing, which provides upper-layer applications with process state information according to the current system synchrony (or QoS). The underlying system model is hybrid, composed of a synchronous part (where there are time bounds on processing speed and message delay) and an asynchronous part (where there are no time bounds). However, such a composition can vary over time, and, in particular, the system may become totally asynchronous (e.g., when the underlying system QoS degrades) or totally synchronous. Moreover, processes are not required to share the same view of the system synchrony at a given time. To illustrate what can be done in this programming model and how to use it, the consensus problem is taken as a benchmark problem. This paper also presents an implementation of the model that relies on a negotiated QoS for communication channels.
Face Recognition Using Laplacianfaces
Abstract: Face recognition is a fairly controversial subject right now. A system such as this can recognize and track dangerous criminals and terrorists in a crowd, but some contend that it is an extreme invasion of privacy. The proponents of large-scale face recognition feel that it is a necessary evil to make our country safer. It could benefit the visually impaired and allow them to interact more easily with their environment. Also, a computer-vision-based authentication system could be put in place to allow computer access, or access to a specific room, using face recognition. Another possible application would be to integrate this technology into an artificial intelligence system for more realistic interaction with humans.
We propose an appearance-based face recognition method called the Laplacianface approach. By using Locality Preserving Projections (LPP), the face images are mapped into a face subspace for analysis. Unlike Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which effectively see only the Euclidean structure of face space, LPP finds an embedding that preserves local information and obtains a face subspace that best detects the essential face manifold structure. The Laplacianfaces are the optimal linear approximations to the eigenfunctions of the Laplace-Beltrami operator on the face manifold. In this way, the unwanted variations resulting from changes in lighting, facial expression, and pose may be eliminated or reduced.
Theoretical analysis shows that PCA, LDA, and LPP can be obtained from different graph models. We compare the proposed Laplacianface approach with the Eigenface and Fisherface methods on three different face data sets. Experimental results suggest that the proposed Laplacianface approach provides a better representation and achieves lower error rates in face recognition. Principal Component Analysis (PCA) is a statistical method under the broad title of factor analysis. The purpose of PCA is to reduce the large dimensionality of the data space (observed variables) to the smaller intrinsic dimensionality of the feature space (independent variables) needed to describe the data economically. This is the case when there is a strong correlation between observed variables. The tasks PCA can perform include prediction, redundancy removal, feature extraction, data compression, and so on. Because PCA is a classical technique that works well in the linear domain, applications having linear models are suitable, such as signal processing, image processing, system and control theory, and communications.
The main idea of using PCA for face recognition is to express the large 1-D vector of pixels constructed from a 2-D face image in terms of the compact principal components of the feature space. This is called eigenspace projection. The eigenspace is calculated by identifying the eigenvectors of the covariance matrix derived from a set of face images (vectors).
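A minimal sketch of this eigenspace projection is shown below, using Apache Commons Math for the eigen decomposition. The library choice and the use of the full covariance matrix (rather than the usual snapshot trick for large images) are assumptions for illustration only.

import org.apache.commons.math3.linear.Array2DRowRealMatrix;
import org.apache.commons.math3.linear.EigenDecomposition;
import org.apache.commons.math3.linear.RealMatrix;

// Sketch of eigenspace projection: training images are flattened 1-D pixel vectors;
// a query face is projected onto the leading eigenvectors of their covariance matrix.
public class EigenspaceSketch {
    public static double[] project(double[][] images, double[] face, int components) {
        int n = images.length, d = images[0].length;

        // Mean face and centered data matrix (one image per row).
        double[] mean = new double[d];
        for (double[] img : images)
            for (int j = 0; j < d; j++) mean[j] += img[j] / n;
        RealMatrix centered = new Array2DRowRealMatrix(n, d);
        for (int i = 0; i < n; i++)
            for (int j = 0; j < d; j++) centered.setEntry(i, j, images[i][j] - mean[j]);

        // Covariance matrix (d x d) and its eigen decomposition.
        RealMatrix cov = centered.transpose().multiply(centered).scalarMultiply(1.0 / (n - 1));
        EigenDecomposition eig = new EigenDecomposition(cov);

        // Project the centered query face onto the leading eigenvectors
        // (Commons Math returns them ordered by decreasing eigenvalue).
        double[] coords = new double[components];
        for (int k = 0; k < components; k++) {
            double[] v = eig.getEigenvector(k).toArray();
            for (int j = 0; j < d; j++) coords[k] += (face[j] - mean[j]) * v[j];
        }
        return coords;
    }
}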
Predictive Job Scheduling in a Connection Limited System using Parallel Genetic Algorithm
Abstract: Job scheduling is the key feature of any computing environment, and the efficiency of computing depends largely on the scheduling technique used. Intelligence is the key factor lacking in today's job scheduling techniques. Genetic algorithms are powerful search techniques based on the mechanisms of natural selection and natural genetics.
Multiple jobs are handled by the scheduler, and the resources the jobs need are in remote locations. Here we assume that the resources a job needs are at a single location and not split over nodes, and that each node that has a resource runs a fixed number of jobs. The existing algorithms are non-predictive and employ greedy algorithms or variants of them. The efficiency of the job scheduling process would increase if previous experience and genetic algorithms were used. In this paper, we propose a model of the scheduling algorithm where the scheduler can learn from previous experiences, so that effective job scheduling is achieved as time progresses.
Digital Image Processing Techniques for the Detection and Removal of Cracks in Digitized Paintings
Abstract: An integrated methodology for the detection and removal of cracks on digitized paintings is presented in this project. The cracks are detected by thresholding the output of the morphological top-hat transform. Afterward, the thin dark brush strokes that have been misidentified as cracks are removed using either a median radial basis function neural network on hue and saturation data or a semi-automatic procedure based on region growing. Finally, crack filling using order statistics filters or controlled anisotropic diffusion is performed. The methodology has been shown to perform very well on digitized paintings suffering from cracks.
A Distributed Database Architecture for Global Roaming in Next-Generation Mobile Networks
Abstract: The next-generation mobile network will support terminal mobility, personal mobility, and service provider portability, making global roaming seamless. A location-independent personal telecommunication number (PTN) scheme is conducive to implementing such a global mobile system. However, the non-geographic PTNs, coupled with the anticipated large number of mobile users in future mobile networks, may introduce very large centralized databases. This necessitates research into the design and performance of high-throughput database technologies used in mobile systems to ensure that future systems will be able to carry the anticipated loads efficiently. This paper proposes a scalable, robust, and efficient location database architecture based on location-independent PTNs. The proposed multi-tree database architecture consists of a number of database subsystems, each of which is a three-level tree structure and is connected to the others only through its root. By exploiting the localized nature of calling and mobility patterns, the proposed architecture effectively reduces the database loads as well as the signaling traffic incurred by the location registration and call delivery procedures. In addition, two memory-resident database indices, the memory-resident direct file and the T-tree, are proposed for the location databases to further improve their throughput. An analysis model and numerical results are presented to evaluate the efficiency of the proposed database architecture. The results reveal that the proposed database architecture for location management can effectively support the anticipated high user density in future mobile networks.
.NET Project Topics
- Speech comparison using neural networks VB.NET
- Fingerprint based employee attendance system VB.NET / SQL/ Hardware
- Distributed mobility management for target tracking in mobile sensor networks VB.NET
- Efficient query processing in peer-to-peer networks VB.NET / SQL
- Character Recognition System VB.NET
- Xml code editor with syntax checker and syntax coloring VB.NET
- Efficient broadcasting in mobile ad hoc networks VB.NET
- Web Analytics ASP.NET / SQL / Javascript
- Intrusion Detection System / Firewall VB.NET
- Home Automation System with Mobile Automation through GSM VB.NET / Hardware
- Desktop Payroll Application VB.NET / SQL
- Voice/ Speech based Browser VB.NET
- Steganography for audio, video, image VB.NET
- Windows Desktop Search VB.NET
- Network Health/Node monitoring [Six Sigma Implementation] VB.NET / SQL
- Secured Software & Authentication VB.NET
- Securing TCP/IP communication using cryptography VB.NET
- File protector with password protection and cryptography security VB.NET
- Implementation of Digital image processing techniques VB.NET
- Perceptual color correction through Variational techniques VB.NET
- Distributed database architecture VB.NET / SQL
- Alert based monitoring of stock trading systems VB.NET/ ASP.NET / SQL
- CASE Tools VB.NET/ ASP.NET / SQL
- Measuring the quality of software modularization VB.NET / SQL
- Mobile- Commerce ASP.NET / SQL
- Mobile-CRM ASP.NET / SQL
- Generic project management portal ASP.NET / SQL
- Web-based recruitment process system ASP.NET / SQL
- Online discussion-forum ASP.NET / SQL
- Equity trading portfolio manager ASP.NET / SQL
- Online Job search portal ASP.NET / SQL
- Online matrimonial portal ASP.NET / SQL
- TorrEx - a BitTorrent Client VB.NET
- GIS based Routing VB.NET / SQL
- Digital Watermarking VB.NET
- Bluetooth Messenger VB.NET
- Financial forecast system VB.NET / SQL
- Intranet application for multiclient chatting VB.NET
- Mobile Online Examination System ASP.NET / SQL
- Development of a simple IP subnet calculator tool VB.NET
- IP based process monitor VB.NET
- Online webmart system for jewellery ASP.NET / SQL
- Online bug tracking and customer support system ASP.NET / SQL
- Intranet Mail System VB.NET / SQL
- Online Admission Management System ASP.NET / SQL
- Online Examination System ASP.NET / SQL
- Attendance & Leave Mgmt System ASP.NET / SQL
- Home Automation System with Speech / Internet/ LAN VB.NET / Hardware
- Medical Diagnosis System VB.NET / SQL
- Wireless Data transfer to mobile VB.NET
- Asset Management VB.NET / SQL
- Remote Desktop VB.NET
- Essential task administration gateway VB.NET
- Portfolio manager VB.NET / SQL
- Effective port scanner and detector VB.NET
- Automated backup and recovery scheduler VB.NET
- Network problem notification via SMS VB.NET / SQL
- Bluetooth based employee attendance system VB.NET / SQL/ Hardware
- Remote explorer and task manager VB.NET
- Online Ticket Reservation System ASP.NET / SQL
- Human Resource Management ASP.NET / SQL
- Construction Management / Tracking System ASP.NET / SQL
- Hotel Management Sys ASP.NET / SQL
- Home Automation System with Speech Recognition VB.NET / Hardware
- Hospital management system VB.NET / SQL
- Customer Query Management System VB.NET / SQL
- ICS (Inventory Control System) VB.NET / SQL
- Library Mgmt VB.NET / SQL
- Medical Store Mgmt VB.NET / SQL
Java Project Ideas
1. 3D RSS Aggregator
Abstract
This graduate project describes a three-dimensional RSS aggregator and visualization tool used to explore and display XML results from any standard RSS source. The tool provides an alternative to the conventional, text-based aggregators and allows three-dimensional navigation through the aggregated data. It is to be developed using the Java 2D API and the Java 3D API. The tool allows different customization options, such as changing the shape of objects in the world and their colour. The design and implementation of the tool are discussed and further extensions are proposed.
Software Specifications:
- Java J2SE
- Windows XP/98
- Apache Server
- Java Servlets
- Apache Commons Net Library
- Java Processing
Existing System:
Conventional aggregators are text-based and 2D in nature. Since RSS aggregator tools are being used increasingly these days, there is a constant need to design new ways to visualize the information (feeds) obtained from the Internet.
RSS (most commonly translated as "Really Simple Syndication" but sometimes "Rich Site Summary") is a family of web feed formats used to publish frequently updated works—such as blog entries, news headlines, audio, and video—in a standardized format. An RSS document (which is called a "feed", "web feed", or "channel") includes full or summarized text, plus metadata such as publishing dates and authorship. Web feeds benefit publishers by letting them syndicate content automatically. They benefit readers who want to subscribe to timely updates from specific websites or to aggregate feeds from many sites into one place. RSS feeds can be read using software called an "RSS reader", "feed reader", or "aggregator", which can be web-based, desktop-based, or mobile-device-based. A standardized XML file format allows the information to be published once and viewed by many different programs. The user subscribes to a feed by entering into the reader the feed's URI – often referred to informally as a "URL" (uniform resource locator), although technically the two terms are not exactly synonymous – or by clicking an RSS icon in a browser that initiates the subscription process. The RSS reader checks the user's subscribed feeds regularly for new work, downloads any updates that it finds, and provides a user interface to monitor and read the feeds.
Aggregator
In computing, a feed aggregator, also known as a feed reader, news reader or simply aggregator, is client software or a Web application which aggregates syndicated web content such as news headlines, blogs, podcasts, and vlogs in a single location for easy viewing.
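Before building the 3D view, the aggregator has to fetch and parse feeds. A minimal sketch using the standard DOM parser is shown below; the feed URL and the restriction to RSS 2.0 item/title elements are assumptions for illustration.

import java.net.URL;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Minimal RSS 2.0 fetch-and-parse sketch: prints the title of every <item> in a feed.
public class RssFetchSketch {
    public static void main(String[] args) throws Exception {
        String feedUrl = "http://example.com/feed.xml";   // assumed feed location
        DocumentBuilder builder =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new URL(feedUrl).openStream());

        NodeList items = doc.getElementsByTagName("item");
        for (int i = 0; i < items.getLength(); i++) {
            Element item = (Element) items.item(i);
            String title = item.getElementsByTagName("title")
                               .item(0).getTextContent();
            System.out.println(title);   // feed entry to be placed in the 3D scene
        }
    }
}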
2. A Cooperative Internet Backup Scheme
Traditional data backup techniques work by writing backup data to removable media, which is then taken off-site to a secure location. For example, a server might write its backup data daily onto tape using an attached tape drive; at the end of each week, the resulting tapes would then be picked up by a truck and driven to a guarded warehouse. The main drawback of these techniques is the inconvenience for system owners of managing the media and transferring it off-site, especially for small installations and PC owners.
In contrast, Internet backup sites (e.g., www.backuphelp.com) avoid this inconvenience by locating the tape or other media drive in the warehouse itself and by using the Internet instead of a truck to transfer the backup data. Customers need only install the supplied backup software to be assured that, so long as their system remains connected to the Internet, their data will be automatically backed up daily without any further action on their part. These sites charge by the month based on the amount of data being backed up; for example, a typical fee today to back up one gigabyte of data is fifty US dollars a month. In this paper we propose a new Internet-based backup technique that appears to be one to two orders of magnitude cheaper than existing Internet backup services. Instead of relying on a central warehouse holding removable media, we use a decentralized peer-to-peer scheme that stores backup data on the participating computers' hard drives.
3. Instant Feeling Messages
The idea is to build a visualization service for incoming mail/SMS messages. The software would allow capturing, storing and sharing of fleeting emotional experiences. Based on the Cognitive Priming theory, as we become more immersed in digital media through the internet, our personal media inventories constantly act as memory aids, “priming” us to better recollect associative, personal (episodic) memories when facing an external stimulus. In such a dynamic environment, these recollections quickly move away from us, emotionally as well as in time. Counting on the fact that, in the near future, personal media inventories will be accessed from the computer and shared with a close collective, the software bundles text, sound and image animation to allow capturing these fleeting emotional experiences, then sharing and reliving them with others we care about. Playfully stemming from the technical, thin jargon of the message world (SMS, Email, RSS Feeds), the project proposes a new, light format of instant messages, dubbed “IFM” (Instant Feeling Messages).
4. MAGIC: Multi tArget Graphical user InterfaCe
To work with a system, the users need to be able to control the system and assess its state. Graphical user interfaces (GUIs) accept input via devices such as the computer keyboard and mouse and provide articulated graphical output on the computer monitor. There are at least two different principles widely used in GUI design: object-oriented user interfaces (OOUIs) and application-oriented interfaces. The graphical user interface is a computer interface that uses graphic icons and controls in addition to text. The user of the computer utilizes a pointing device, like a mouse, to manipulate these icons and controls. This is considerably different from the command line interface (CLI), in which the user types a series of text commands to the computer. Many operating systems graduated from console-based interfaces to GUIs after finding them more acceptable to end users. Even today, designing a good GUI in a widely used language is not an easy task, and it is made more difficult by the perplexing programming constructs these languages provide.
We plan to design a simple user language with easy-to-understand constructs for designing a user interface. But, of course, the user will not want only the GUI to be in a language other than the one in which he is developing the application. To overcome this issue we plan to implement a compiler, to be written in Java, which will translate this new language into a target language such as Java, as sketched below. Thus a user will get the code for the GUI he is designing in a high-level language. We also plan to provide an IDE for writing the new language and for compiling it to the target language.
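As a rough illustration of the compiler idea, a minimal sketch follows: it reads a toy one-widget-per-line specification and emits the corresponding Java Swing code as text. The specification syntax ("label <text>", "button <text>") is invented here for illustration and is not the actual MAGIC language.

import java.util.Arrays;
import java.util.List;

// Toy GUI-language translator: each input line ("label Hello" or "button OK")
// becomes one line of generated Java Swing code.
public class MagicCompilerSketch {
    static String compile(List<String> spec) {
        StringBuilder out = new StringBuilder();
        out.append("javax.swing.JFrame frame = new javax.swing.JFrame(\"MAGIC\");\n");
        out.append("frame.setLayout(new java.awt.FlowLayout());\n");
        int id = 0;
        for (String line : spec) {
            String[] parts = line.trim().split("\\s+", 2);   // keyword + argument
            if (parts.length < 2) continue;                  // malformed line: skipped
            String var = "w" + (id++);
            if (parts[0].equals("label")) {
                out.append("javax.swing.JLabel " + var
                        + " = new javax.swing.JLabel(\"" + parts[1] + "\");\n");
            } else if (parts[0].equals("button")) {
                out.append("javax.swing.JButton " + var
                        + " = new javax.swing.JButton(\"" + parts[1] + "\");\n");
            } else {
                continue;                                    // unknown construct: skipped
            }
            out.append("frame.add(" + var + ");\n");
        }
        out.append("frame.pack();\nframe.setVisible(true);\n");
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(compile(Arrays.asList("label Enter name:", "button Submit")));
    }
}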